Jan 22 16:29:37 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 16:29:37 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:37 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 
16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:29:38 crc 
restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 
16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 
16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:29:38 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 16:29:38 crc kubenswrapper[4758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 16:29:38 crc kubenswrapper[4758]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 16:29:38 crc kubenswrapper[4758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 16:29:38 crc kubenswrapper[4758]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 22 16:29:38 crc kubenswrapper[4758]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 22 16:29:38 crc kubenswrapper[4758]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.673083 4758 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676327 4758 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676345 4758 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676350 4758 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676354 4758 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676357 4758 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676361 4758 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676365 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676369 4758 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676372 4758 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676376 4758 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676380 4758 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676383 4758 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676388 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676392 4758 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676396 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676400 4758 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676404 4758 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676410 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676413 4758 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676422 4758 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676426 4758 feature_gate.go:330] 
unrecognized feature gate: MultiArchInstallGCP Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676429 4758 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676434 4758 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676439 4758 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676443 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676448 4758 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676453 4758 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676457 4758 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676461 4758 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676465 4758 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676468 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676472 4758 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676476 4758 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676480 4758 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676483 4758 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676487 4758 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676490 4758 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676494 4758 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676499 4758 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676503 4758 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676507 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676511 4758 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676516 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676520 4758 feature_gate.go:330] unrecognized feature gate: Example Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676523 4758 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676528 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676531 4758 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676535 4758 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676538 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676542 4758 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676546 4758 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676549 4758 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676553 4758 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676556 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676560 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676563 4758 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676568 4758 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676572 4758 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676576 4758 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676580 4758 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676584 4758 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676588 4758 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676592 4758 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676596 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676599 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676602 4758 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676606 4758 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676609 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676613 4758 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676616 4758 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.676620 4758 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676858 4758 flags.go:64] FLAG: --address="0.0.0.0" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676895 4758 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676908 4758 flags.go:64] FLAG: --anonymous-auth="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676913 4758 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676919 4758 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676924 4758 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676930 4758 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676935 4758 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676939 4758 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676943 4758 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676948 4758 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676952 4758 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676956 4758 flags.go:64] FLAG: 
--cgroup-driver="cgroupfs" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676960 4758 flags.go:64] FLAG: --cgroup-root="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676965 4758 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676969 4758 flags.go:64] FLAG: --client-ca-file="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676973 4758 flags.go:64] FLAG: --cloud-config="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676977 4758 flags.go:64] FLAG: --cloud-provider="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676981 4758 flags.go:64] FLAG: --cluster-dns="[]" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676990 4758 flags.go:64] FLAG: --cluster-domain="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.676995 4758 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677000 4758 flags.go:64] FLAG: --config-dir="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677005 4758 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677010 4758 flags.go:64] FLAG: --container-log-max-files="5" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677018 4758 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677022 4758 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677028 4758 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677033 4758 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677038 4758 flags.go:64] FLAG: --contention-profiling="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677043 4758 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677049 4758 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677054 4758 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677059 4758 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677065 4758 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677071 4758 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677076 4758 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677080 4758 flags.go:64] FLAG: --enable-load-reader="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677085 4758 flags.go:64] FLAG: --enable-server="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677090 4758 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677102 4758 flags.go:64] FLAG: --event-burst="100" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677108 4758 flags.go:64] FLAG: --event-qps="50" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677113 4758 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677118 4758 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 22 
16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677123 4758 flags.go:64] FLAG: --eviction-hard="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677129 4758 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677135 4758 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677140 4758 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677146 4758 flags.go:64] FLAG: --eviction-soft="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677150 4758 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677156 4758 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677162 4758 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677172 4758 flags.go:64] FLAG: --experimental-mounter-path="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677177 4758 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677182 4758 flags.go:64] FLAG: --fail-swap-on="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677187 4758 flags.go:64] FLAG: --feature-gates="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677194 4758 flags.go:64] FLAG: --file-check-frequency="20s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677199 4758 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677205 4758 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677211 4758 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677216 4758 flags.go:64] FLAG: --healthz-port="10248" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677221 4758 flags.go:64] FLAG: --help="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677226 4758 flags.go:64] FLAG: --hostname-override="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677231 4758 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677237 4758 flags.go:64] FLAG: --http-check-frequency="20s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677242 4758 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677246 4758 flags.go:64] FLAG: --image-credential-provider-config="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677254 4758 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677259 4758 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677263 4758 flags.go:64] FLAG: --image-service-endpoint="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677268 4758 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677273 4758 flags.go:64] FLAG: --kube-api-burst="100" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677278 4758 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677284 4758 flags.go:64] FLAG: --kube-api-qps="50" Jan 22 16:29:38 crc 
kubenswrapper[4758]: I0122 16:29:38.677288 4758 flags.go:64] FLAG: --kube-reserved="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677293 4758 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677298 4758 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677303 4758 flags.go:64] FLAG: --kubelet-cgroups="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677308 4758 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677313 4758 flags.go:64] FLAG: --lock-file="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677317 4758 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677322 4758 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677327 4758 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677335 4758 flags.go:64] FLAG: --log-json-split-stream="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677340 4758 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677344 4758 flags.go:64] FLAG: --log-text-split-stream="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677349 4758 flags.go:64] FLAG: --logging-format="text" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677354 4758 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677365 4758 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677370 4758 flags.go:64] FLAG: --manifest-url="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677375 4758 flags.go:64] FLAG: --manifest-url-header="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677381 4758 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677386 4758 flags.go:64] FLAG: --max-open-files="1000000" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677393 4758 flags.go:64] FLAG: --max-pods="110" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677399 4758 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677404 4758 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677409 4758 flags.go:64] FLAG: --memory-manager-policy="None" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677414 4758 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677420 4758 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677426 4758 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677431 4758 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677443 4758 flags.go:64] FLAG: --node-status-max-images="50" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677449 4758 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677456 4758 
flags.go:64] FLAG: --oom-score-adj="-999" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677461 4758 flags.go:64] FLAG: --pod-cidr="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677466 4758 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677475 4758 flags.go:64] FLAG: --pod-manifest-path="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677479 4758 flags.go:64] FLAG: --pod-max-pids="-1" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677484 4758 flags.go:64] FLAG: --pods-per-core="0" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677489 4758 flags.go:64] FLAG: --port="10250" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677494 4758 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677499 4758 flags.go:64] FLAG: --provider-id="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677504 4758 flags.go:64] FLAG: --qos-reserved="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677509 4758 flags.go:64] FLAG: --read-only-port="10255" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677514 4758 flags.go:64] FLAG: --register-node="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677519 4758 flags.go:64] FLAG: --register-schedulable="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677523 4758 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677532 4758 flags.go:64] FLAG: --registry-burst="10" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677537 4758 flags.go:64] FLAG: --registry-qps="5" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677542 4758 flags.go:64] FLAG: --reserved-cpus="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677547 4758 flags.go:64] FLAG: --reserved-memory="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677553 4758 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677558 4758 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677563 4758 flags.go:64] FLAG: --rotate-certificates="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677571 4758 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677576 4758 flags.go:64] FLAG: --runonce="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677581 4758 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677586 4758 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677591 4758 flags.go:64] FLAG: --seccomp-default="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677596 4758 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677601 4758 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677619 4758 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677630 4758 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 
16:29:38.677635 4758 flags.go:64] FLAG: --storage-driver-password="root" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677640 4758 flags.go:64] FLAG: --storage-driver-secure="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677645 4758 flags.go:64] FLAG: --storage-driver-table="stats" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677651 4758 flags.go:64] FLAG: --storage-driver-user="root" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677656 4758 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677661 4758 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677666 4758 flags.go:64] FLAG: --system-cgroups="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677671 4758 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677680 4758 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677684 4758 flags.go:64] FLAG: --tls-cert-file="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677689 4758 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677696 4758 flags.go:64] FLAG: --tls-min-version="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677701 4758 flags.go:64] FLAG: --tls-private-key-file="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677705 4758 flags.go:64] FLAG: --topology-manager-policy="none" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677710 4758 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677715 4758 flags.go:64] FLAG: --topology-manager-scope="container" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677720 4758 flags.go:64] FLAG: --v="2" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677727 4758 flags.go:64] FLAG: --version="false" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677734 4758 flags.go:64] FLAG: --vmodule="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677766 4758 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.677772 4758 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677911 4758 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677920 4758 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677926 4758 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677933 4758 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677938 4758 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677942 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677947 4758 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677952 4758 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677956 4758 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677961 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677965 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677969 4758 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677974 4758 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677978 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677982 4758 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677987 4758 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677991 4758 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677995 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.677999 4758 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678004 4758 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678008 4758 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678012 4758 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678016 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678021 4758 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678025 4758 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678029 4758 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678034 4758 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678038 4758 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678042 4758 feature_gate.go:330] 
unrecognized feature gate: InsightsConfigAPI Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678047 4758 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678052 4758 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678057 4758 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678061 4758 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678065 4758 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678069 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678074 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678079 4758 feature_gate.go:330] unrecognized feature gate: Example Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678083 4758 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678087 4758 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678091 4758 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678095 4758 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678099 4758 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678104 4758 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678108 4758 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678112 4758 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678117 4758 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678121 4758 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678126 4758 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678130 4758 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678135 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678139 4758 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678144 4758 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678148 4758 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678152 4758 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678157 4758 feature_gate.go:330] 
unrecognized feature gate: ChunkSizeMiB Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678161 4758 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678166 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678170 4758 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678175 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678179 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678183 4758 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678187 4758 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678193 4758 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678199 4758 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678204 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678208 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678213 4758 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678218 4758 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678222 4758 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678227 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.678231 4758 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.678386 4758 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.685254 4758 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.685292 4758 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685387 4758 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685399 4758 feature_gate.go:330] unrecognized feature gate: Example Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685405 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 16:29:38 
crc kubenswrapper[4758]: W0122 16:29:38.685413 4758 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685420 4758 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685426 4758 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685433 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685440 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685446 4758 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685452 4758 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685458 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685464 4758 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685470 4758 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685476 4758 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685482 4758 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685488 4758 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685494 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685501 4758 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685506 4758 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685512 4758 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685518 4758 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685526 4758 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685534 4758 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685541 4758 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685547 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685555 4758 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685565 4758 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685572 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685579 4758 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685588 4758 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685596 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685603 4758 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685611 4758 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685617 4758 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685624 4758 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685629 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685634 4758 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685640 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685646 4758 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685652 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685659 4758 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685666 4758 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685672 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685678 4758 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685684 4758 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685690 4758 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685696 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685702 4758 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685707 4758 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685712 4758 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685717 4758 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685722 4758 feature_gate.go:330] 
unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685726 4758 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685732 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685767 4758 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685775 4758 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685781 4758 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685789 4758 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685795 4758 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685801 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685807 4758 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685814 4758 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685820 4758 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685826 4758 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685832 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685838 4758 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685844 4758 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685853 4758 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685861 4758 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685868 4758 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.685877 4758 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.685888 4758 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686067 4758 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686078 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686085 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686100 4758 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686107 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686114 4758 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686119 4758 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686125 4758 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686130 4758 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686135 4758 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686140 4758 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686145 4758 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686150 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686155 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686160 4758 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686165 4758 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686170 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686174 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686180 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686187 4758 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686191 4758 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686196 4758 feature_gate.go:330] unrecognized feature gate: 
CSIDriverSharedResource Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686203 4758 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686209 4758 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686214 4758 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686219 4758 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686224 4758 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686229 4758 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686234 4758 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686238 4758 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686243 4758 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686251 4758 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686259 4758 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686267 4758 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686275 4758 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686283 4758 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686290 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686296 4758 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686303 4758 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686309 4758 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686315 4758 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686321 4758 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686327 4758 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686334 4758 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686340 4758 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686347 4758 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686353 4758 feature_gate.go:330] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686359 4758 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686365 4758 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686371 4758 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686377 4758 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686383 4758 feature_gate.go:330] unrecognized feature gate: Example Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686389 4758 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686395 4758 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686402 4758 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686408 4758 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686414 4758 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686421 4758 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686427 4758 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686433 4758 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686439 4758 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686448 4758 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686455 4758 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686463 4758 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686471 4758 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686476 4758 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686483 4758 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686492 4758 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686499 4758 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686506 4758 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.686513 4758 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.686524 4758 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.687016 4758 server.go:940] "Client rotation is on, will bootstrap in background" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.690043 4758 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.690133 4758 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.690735 4758 server.go:997] "Starting client certificate rotation" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.690777 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.691138 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-18 20:27:16.341524895 +0000 UTC Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.691238 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.700855 4758 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.702433 4758 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.702474 4758 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.709806 4758 log.go:25] "Validated CRI v1 runtime API" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.725506 4758 log.go:25] "Validated CRI v1 image API" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.726664 4758 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.728839 4758 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-16-23-09-00:/dev/sr0 7B77-95E7:/dev/vda2 
de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.728866 4758 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.741828 4758 manager.go:217] Machine: {Timestamp:2026-01-22 16:29:38.740887592 +0000 UTC m=+0.224226897 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:83805c52-2bba-4705-bdbe-9101a9d1190e BootID:f7288053-8dca-462f-b24f-6a9d8be738b3 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:db:77:ca Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:db:77:ca Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8d:49:67 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6b:14:47 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8d:ad:3c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:8e:29:17 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:85:ad:fb:2c:39 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ce:4c:2d:de:7d:82 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 
Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.742029 4758 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
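Editor's note: earlier in this run the certificate manager's CSR POST to https://api-int.crc.testing:6443 fails with "connection refused" while the already-loaded client certificate (expiring 2026-02-24 05:52:08 UTC per the log) keeps the kubelet authenticated. A minimal standalone sketch, assuming it is run on this node and using the endpoint and PEM path quoted in the log, that reproduces both observations:

// check_kubelet_client_cert.go - editorial sketch, not part of the kubelet.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// 1. Reproduce the "connection refused" seen by the certificate manager.
	const apiEndpoint = "api-int.crc.testing:6443" // from the CSR error in the log
	if conn, err := net.DialTimeout("tcp", apiEndpoint, 3*time.Second); err != nil {
		fmt.Println("kube-apiserver unreachable:", err)
	} else {
		conn.Close()
		fmt.Println("kube-apiserver reachable")
	}

	// 2. Print the expiry of the client certificate the kubelet loaded.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse cert:", err)
			return
		}
		fmt.Println("client cert expires:", cert.NotAfter) // log reports 2026-02-24 05:52:08 UTC
	}
}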
Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.742141 4758 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.742655 4758 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.742865 4758 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.742906 4758 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.743291 4758 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.743313 4758 container_manager_linux.go:303] "Creating device plugin manager" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.743443 4758 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.743588 4758 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.743907 4758 state_mem.go:36] "Initialized new in-memory state store" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.744004 4758 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.745102 4758 kubelet.go:418] "Attempting to sync node with API server" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.745130 4758 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
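Editor's note: combining the Node Config above with the Machine line earlier (MemoryCapacity 33654128640 bytes): SystemReserved memory is 350Mi, KubeReserved is null, and the memory.available hard-eviction threshold is 100Mi, so allocatable memory comes out to roughly capacity minus 450Mi. A quick arithmetic sketch of that standard allocatable formula, applied only to the numbers reported here:

// allocatable_sketch.go - back-of-the-envelope check, not kubelet code.
package main

import "fmt"

const Mi = 1024 * 1024

func main() {
	capacity := int64(33654128640)    // MemoryCapacity from the Machine line
	systemReserved := int64(350 * Mi) // "SystemReserved":{"memory":"350Mi"}
	evictionHard := int64(100 * Mi)   // memory.available hard-eviction threshold

	// allocatable = capacity - system-reserved - kube-reserved(0) - hard eviction
	allocatable := capacity - systemReserved - evictionHard
	fmt.Printf("allocatable memory ≈ %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/float64(1024*1024*1024))
}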
Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.745226 4758 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.745248 4758 kubelet.go:324] "Adding apiserver pod source" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.745262 4758 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.748032 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.748042 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.748146 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.748173 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.749098 4758 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.749500 4758 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
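Editor's note: the failed reflector list calls above carry URL-encoded field selectors; decoding them shows the kubelet was listing Services with spec.clusterIP!=None and the single Node metadata.name=crc. A tiny sketch of that decoding, using only the strings quoted in the log:

// fieldselector_decode.go - editorial sketch.
package main

import (
	"fmt"
	"net/url"
)

func main() {
	raw := []string{
		"spec.clusterIP%21%3DNone", // from the Service list URL above
		"metadata.name%3Dcrc",      // from the Node list URL above
	}
	for _, r := range raw {
		s, err := url.QueryUnescape(r)
		if err != nil {
			fmt.Println("decode error:", err)
			continue
		}
		fmt.Printf("%s -> %s\n", r, s) // e.g. spec.clusterIP!=None
	}
}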
Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750167 4758 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750878 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750904 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750914 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750924 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750939 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750947 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750956 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750973 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.750986 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.751000 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.751016 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.751028 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.751266 4758 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.751804 4758 server.go:1280] "Started kubelet" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.752399 4758 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.752452 4758 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.752881 4758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 16:29:38 crc systemd[1]: Started Kubernetes Kubelet. 
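Editor's note: once "Started kubelet" and "Starting to listen" on 0.0.0.0:10250 appear above, the kubelet's serving port should accept TCP connections even while the API server is still unreachable. A minimal reachability probe, assuming it is run locally on the node (it only opens and closes a TCP connection, no kubelet API calls):

// kubelet_port_probe.go - editorial sketch.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port taken from the "Starting to listen" line above; 127.0.0.1 assumes we run on the node.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:10250", 2*time.Second)
	if err != nil {
		fmt.Println("kubelet port 10250 not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("kubelet is listening on 10250")
}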
Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.753660 4758 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.754672 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.754713 4758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.754258 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d1a830adce2fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 16:29:38.751759099 +0000 UTC m=+0.235098394,LastTimestamp:2026-01-22 16:29:38.751759099 +0000 UTC m=+0.235098394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.754907 4758 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.754925 4758 server.go:460] "Adding debug handlers to kubelet server" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.754884 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:20:57.882980873 +0000 UTC Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.755022 4758 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.755036 4758 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.755060 4758 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.755528 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="200ms" Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.755658 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.755703 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.755927 4758 factory.go:55] Registering systemd factory Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.755943 4758 factory.go:221] Registration of the 
systemd container factory successfully Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.756160 4758 factory.go:153] Registering CRI-O factory Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.756173 4758 factory.go:221] Registration of the crio container factory successfully Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.756218 4758 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.756235 4758 factory.go:103] Registering Raw factory Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.756248 4758 manager.go:1196] Started watching for new ooms in manager Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.756784 4758 manager.go:319] Starting recovery of all containers Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.768690 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.768777 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.768827 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.768847 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.768863 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.768879 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.768893 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.768951 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 16:29:38 
crc kubenswrapper[4758]: I0122 16:29:38.768974 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.768991 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769010 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769026 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769043 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769061 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769075 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769086 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769097 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769135 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769154 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769166 4758 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769178 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769192 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769204 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769216 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769229 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769243 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769260 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769274 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769287 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769298 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769310 4758 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769323 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769356 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769389 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769404 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769422 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769439 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769457 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769475 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769492 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769509 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769526 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769545 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769564 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769581 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769599 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769618 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769644 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769706 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769730 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769774 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769791 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769817 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769837 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769855 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769874 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769892 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769908 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769925 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769942 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769961 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769976 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.769993 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770008 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770025 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770041 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770055 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770070 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770084 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770099 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770114 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770128 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770143 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770157 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770173 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770188 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770203 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770219 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770238 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770261 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770278 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770294 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770312 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770328 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770344 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770361 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770376 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770392 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770409 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770426 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770444 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770460 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770478 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770494 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770513 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770529 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770544 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770560 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770575 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770591 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770609 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770625 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770642 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770659 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770681 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770699 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770717 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770776 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770801 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770819 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770841 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770858 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770876 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770894 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770912 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770929 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770945 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770960 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770976 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.770992 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771007 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771022 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771041 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771058 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771077 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771093 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771112 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771130 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771145 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771161 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771176 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771192 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771209 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771225 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771244 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771262 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771277 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771291 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771307 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771328 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771342 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771358 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771376 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771391 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771406 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771421 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771439 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771456 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771472 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771488 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771504 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771520 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" 
Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771534 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771548 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771564 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771589 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771653 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771672 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771690 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771706 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771721 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771737 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771789 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" 
Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771803 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771820 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771836 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771850 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771865 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771881 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771904 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771920 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771939 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771957 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.771972 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 
16:29:38.771989 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772004 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772019 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772034 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772050 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772066 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772082 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772097 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772113 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772130 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772147 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772162 4758 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772178 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772194 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772211 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772227 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772244 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772259 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772276 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772292 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772308 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772324 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.772342 4758 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773286 4758 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773321 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773339 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773356 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773375 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773391 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773407 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773422 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773438 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773470 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" 
seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773482 4758 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773494 4758 reconstruct.go:97] "Volume reconstruction finished" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.773502 4758 reconciler.go:26] "Reconciler: start to sync state" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.776980 4758 manager.go:324] Recovery completed Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.786971 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.788381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.788409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.788418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.789345 4758 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.789360 4758 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.789376 4758 state_mem.go:36] "Initialized new in-memory state store" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.804420 4758 policy_none.go:49] "None policy: Start" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.805138 4758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.805692 4758 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.805780 4758 state_mem.go:35] "Initializing new in-memory state store" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.806729 4758 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.806787 4758 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.806814 4758 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.806859 4758 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 16:29:38 crc kubenswrapper[4758]: W0122 16:29:38.807492 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.807539 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.855299 4758 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.861269 4758 manager.go:334] "Starting Device Plugin manager" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.861317 4758 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.861329 4758 server.go:79] "Starting device plugin registration server" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.861704 4758 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.861791 4758 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.863184 4758 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.863306 4758 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.863319 4758 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.872832 4758 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.907560 4758 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.907661 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.908613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.908672 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.908685 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.908897 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.909089 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.909119 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.909813 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.909840 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.909849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910034 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910201 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910414 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910477 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910801 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.910940 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.911051 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.911093 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.911440 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.911470 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.911482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.911854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.911881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.911892 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.912345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.912387 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.912396 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.912539 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.912623 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.912655 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.913268 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.913288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.913296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.913401 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.913424 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.913703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.913721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.913728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.914068 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.914098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.914109 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.956523 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="400ms" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.961983 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.963355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.963397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.963414 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.963446 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:29:38 crc kubenswrapper[4758]: E0122 16:29:38.964022 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975604 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975666 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 
16:29:38.975697 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975725 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975775 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975803 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975830 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975889 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975916 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975945 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.975975 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.976001 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.976027 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:38 crc kubenswrapper[4758]: I0122 16:29:38.976057 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077122 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077200 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077221 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077241 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077262 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:39 crc 
kubenswrapper[4758]: I0122 16:29:39.077281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077302 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077342 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077361 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077381 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077414 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077472 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077503 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077420 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: 
I0122 16:29:39.077478 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077433 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077420 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077525 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077573 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077577 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077647 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077649 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077668 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077672 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077692 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077767 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077732 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.077928 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.164857 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.166668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.166723 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.166736 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.166784 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:29:39 crc kubenswrapper[4758]: E0122 16:29:39.167287 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.244718 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.259242 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.268346 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: W0122 16:29:39.275505 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7dc4c9a16bd184923755e3b4fb324cafe94144d28834950019dca46d67e64beb WatchSource:0}: Error finding container 7dc4c9a16bd184923755e3b4fb324cafe94144d28834950019dca46d67e64beb: Status 404 returned error can't find the container with id 7dc4c9a16bd184923755e3b4fb324cafe94144d28834950019dca46d67e64beb Jan 22 16:29:39 crc kubenswrapper[4758]: W0122 16:29:39.283880 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-705f868088456892b5c20681840df7c5ee4eaf1c8fde87ecb1c048e77d3b6992 WatchSource:0}: Error finding container 705f868088456892b5c20681840df7c5ee4eaf1c8fde87ecb1c048e77d3b6992: Status 404 returned error can't find the container with id 705f868088456892b5c20681840df7c5ee4eaf1c8fde87ecb1c048e77d3b6992 Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.286016 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.290628 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:39 crc kubenswrapper[4758]: W0122 16:29:39.292956 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-129eded3dce41597a90ce5150ee34d00682416564be9965571b10ed13ab33493 WatchSource:0}: Error finding container 129eded3dce41597a90ce5150ee34d00682416564be9965571b10ed13ab33493: Status 404 returned error can't find the container with id 129eded3dce41597a90ce5150ee34d00682416564be9965571b10ed13ab33493 Jan 22 16:29:39 crc kubenswrapper[4758]: W0122 16:29:39.304727 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3ff95a36a4c801f1fa12b4a7647ac409fd29afbc147ce9247232c34f3cd28def WatchSource:0}: Error finding container 3ff95a36a4c801f1fa12b4a7647ac409fd29afbc147ce9247232c34f3cd28def: Status 404 returned error can't find the container with id 3ff95a36a4c801f1fa12b4a7647ac409fd29afbc147ce9247232c34f3cd28def Jan 22 16:29:39 crc kubenswrapper[4758]: W0122 16:29:39.309988 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-8f803ecdad93a63e251d786f6e91230c92539b67fda131008e5f09b5ebe5d9a2 WatchSource:0}: Error finding container 8f803ecdad93a63e251d786f6e91230c92539b67fda131008e5f09b5ebe5d9a2: Status 404 returned error can't find the container with id 8f803ecdad93a63e251d786f6e91230c92539b67fda131008e5f09b5ebe5d9a2 Jan 22 16:29:39 crc kubenswrapper[4758]: E0122 16:29:39.357997 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="800ms" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.568347 4758 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.569734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.569817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.569827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.569852 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:29:39 crc kubenswrapper[4758]: E0122 16:29:39.570297 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Jan 22 16:29:39 crc kubenswrapper[4758]: W0122 16:29:39.703766 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:39 crc kubenswrapper[4758]: E0122 16:29:39.704120 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.753524 4758 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.755578 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:23:50.402099289 +0000 UTC Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.811194 4758 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd" exitCode=0 Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.811271 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.811352 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7dc4c9a16bd184923755e3b4fb324cafe94144d28834950019dca46d67e64beb"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.811429 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.812418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.812443 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.812453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.812471 4758 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb" exitCode=0 Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.812524 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.812585 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8f803ecdad93a63e251d786f6e91230c92539b67fda131008e5f09b5ebe5d9a2"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.812683 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.813316 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.813340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.813350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4758]: W0122 16:29:39.813801 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:39 crc kubenswrapper[4758]: E0122 16:29:39.813849 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.815518 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.815544 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3ff95a36a4c801f1fa12b4a7647ac409fd29afbc147ce9247232c34f3cd28def"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.818043 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4" exitCode=0 Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.818129 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.818168 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"129eded3dce41597a90ce5150ee34d00682416564be9965571b10ed13ab33493"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.818254 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.819154 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.819187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.819200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.819522 4758 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8" exitCode=0 Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.819551 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.819568 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"705f868088456892b5c20681840df7c5ee4eaf1c8fde87ecb1c048e77d3b6992"} Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.819645 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.820285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.820307 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.820315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.822520 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.823360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.823388 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4758]: I0122 16:29:39.823398 4758 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4758]: W0122 16:29:39.875949 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:39 crc kubenswrapper[4758]: E0122 16:29:39.876057 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:29:40 crc kubenswrapper[4758]: W0122 16:29:40.031157 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 16:29:40 crc kubenswrapper[4758]: E0122 16:29:40.031240 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:29:40 crc kubenswrapper[4758]: E0122 16:29:40.159778 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="1.6s" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.371273 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.372349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.372376 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.372385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.372404 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:29:40 crc kubenswrapper[4758]: E0122 16:29:40.372734 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.756219 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 15:27:52.964696349 +0000 UTC Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.768889 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.823149 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"cdcb3871deb3a437bfd84b017af8233d06a10cbc0da01bb1aca18a10b40ca3fc"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.823266 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.824339 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.824405 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.824419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.825877 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.825913 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.825948 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.826073 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.826870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.826915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.826929 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.827980 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.828016 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.828022 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.828039 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.828730 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.828769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.828778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.834172 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.834233 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.834246 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.834275 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.834291 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.834486 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.835757 4758 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5" exitCode=0 Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.835790 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5"} Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.835861 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.836379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.836405 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.836413 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.836882 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.836901 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4758]: I0122 16:29:40.836910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.757259 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:34:46.296128668 +0000 UTC Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.841379 4758 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9" exitCode=0 Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.841472 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9"} Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.841528 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.841578 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.841600 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.841699 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.843654 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.843696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.843710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.843855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.843910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.843937 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.844248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.844307 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.844334 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.973013 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.974065 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.974104 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.974116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4758]: I0122 16:29:41.974146 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.757727 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:03:16.091988599 +0000 UTC Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.849243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902"} Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.849324 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d"} Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.849355 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792"} Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.849380 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4"} Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.849405 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59"} Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.849608 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.851992 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.852146 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4758]: I0122 16:29:42.852170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4758]: I0122 16:29:43.138030 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:43 crc kubenswrapper[4758]: I0122 16:29:43.138231 4758 prober_manager.go:312] "Failed to trigger 
a manual run" probe="Readiness" Jan 22 16:29:43 crc kubenswrapper[4758]: I0122 16:29:43.138271 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:43 crc kubenswrapper[4758]: I0122 16:29:43.139692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4758]: I0122 16:29:43.139735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4758]: I0122 16:29:43.139781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4758]: I0122 16:29:43.758145 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:28:40.409280734 +0000 UTC Jan 22 16:29:44 crc kubenswrapper[4758]: I0122 16:29:44.758922 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:13:32.889896633 +0000 UTC Jan 22 16:29:44 crc kubenswrapper[4758]: I0122 16:29:44.874923 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:44 crc kubenswrapper[4758]: I0122 16:29:44.875140 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:44 crc kubenswrapper[4758]: I0122 16:29:44.876532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4758]: I0122 16:29:44.876575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4758]: I0122 16:29:44.876598 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.122353 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.122618 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.124359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.124433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.124450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.759659 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:20:12.500460136 +0000 UTC Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.811783 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.831419 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.831703 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.833139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.833202 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.833222 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.857671 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.858660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.858728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4758]: I0122 16:29:45.858789 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.406222 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.406408 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.407554 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.407615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.407629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.681849 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.682371 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.683682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.683717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.683728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.759801 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:27:48.401860241 +0000 UTC Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.805533 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.811694 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.860946 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.862361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.862412 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4758]: I0122 16:29:46.862434 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4758]: I0122 16:29:47.760839 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:45:23.752415729 +0000 UTC Jan 22 16:29:47 crc kubenswrapper[4758]: I0122 16:29:47.863101 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:29:47 crc kubenswrapper[4758]: I0122 16:29:47.863175 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:47 crc kubenswrapper[4758]: I0122 16:29:47.864517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4758]: I0122 16:29:47.864571 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4758]: I0122 16:29:47.864594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4758]: I0122 16:29:47.875221 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 16:29:47 crc kubenswrapper[4758]: I0122 16:29:47.875292 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 16:29:48 crc kubenswrapper[4758]: I0122 16:29:48.266534 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:48 crc kubenswrapper[4758]: I0122 16:29:48.761658 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:11:21.738048288 +0000 UTC Jan 22 16:29:48 crc kubenswrapper[4758]: I0122 16:29:48.866285 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:48 crc kubenswrapper[4758]: I0122 
16:29:48.867356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4758]: I0122 16:29:48.867394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4758]: I0122 16:29:48.867405 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4758]: E0122 16:29:48.875186 4758 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 16:29:49 crc kubenswrapper[4758]: I0122 16:29:49.762008 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:09:25.295894428 +0000 UTC Jan 22 16:29:50 crc kubenswrapper[4758]: I0122 16:29:50.753793 4758 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 16:29:50 crc kubenswrapper[4758]: I0122 16:29:50.763111 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:51:09.951473055 +0000 UTC Jan 22 16:29:50 crc kubenswrapper[4758]: E0122 16:29:50.773696 4758 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.583521 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.583864 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.585532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.585581 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.585597 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.600472 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.600551 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.606271 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.606362 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 16:29:51 crc kubenswrapper[4758]: I0122 16:29:51.764147 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:33:28.083667416 +0000 UTC Jan 22 16:29:52 crc kubenswrapper[4758]: I0122 16:29:52.764363 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:45:12.962065942 +0000 UTC Jan 22 16:29:53 crc kubenswrapper[4758]: I0122 16:29:53.765272 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:59:01.924796352 +0000 UTC Jan 22 16:29:54 crc kubenswrapper[4758]: I0122 16:29:54.765434 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:19:38.617818092 +0000 UTC Jan 22 16:29:54 crc kubenswrapper[4758]: I0122 16:29:54.918401 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 16:29:54 crc kubenswrapper[4758]: I0122 16:29:54.934911 4758 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.766269 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:14:49.745086815 +0000 UTC Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.820892 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.821105 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.822378 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.822416 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.822428 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 
16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.829714 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.885077 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.885165 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.886546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.886607 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4758]: I0122 16:29:55.886627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.581455 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.584842 4758 trace.go:236] Trace[1071134114]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 16:29:42.491) (total time: 14092ms): Jan 22 16:29:56 crc kubenswrapper[4758]: Trace[1071134114]: ---"Objects listed" error: 14092ms (16:29:56.584) Jan 22 16:29:56 crc kubenswrapper[4758]: Trace[1071134114]: [14.092905688s] [14.092905688s] END Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.584894 4758 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.585124 4758 trace.go:236] Trace[1545832111]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 16:29:43.053) (total time: 13531ms): Jan 22 16:29:56 crc kubenswrapper[4758]: Trace[1545832111]: ---"Objects listed" error: 13531ms (16:29:56.584) Jan 22 16:29:56 crc kubenswrapper[4758]: Trace[1545832111]: [13.531046711s] [13.531046711s] END Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.585176 4758 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.585572 4758 trace.go:236] Trace[853997874]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 16:29:42.349) (total time: 14236ms): Jan 22 16:29:56 crc kubenswrapper[4758]: Trace[853997874]: ---"Objects listed" error: 14236ms (16:29:56.585) Jan 22 16:29:56 crc kubenswrapper[4758]: Trace[853997874]: [14.236118327s] [14.236118327s] END Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.585615 4758 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.587724 4758 trace.go:236] Trace[858143820]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 16:29:42.757) (total time: 13829ms): Jan 22 16:29:56 crc kubenswrapper[4758]: Trace[858143820]: ---"Objects listed" error: 13829ms (16:29:56.587) Jan 22 16:29:56 crc kubenswrapper[4758]: Trace[858143820]: [13.829893519s] [13.829893519s] END Jan 22 16:29:56 crc kubenswrapper[4758]: 
I0122 16:29:56.587804 4758 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.588420 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.592397 4758 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.649453 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34494->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.649458 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34500->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.649940 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34500->192.168.126.11:17697: read: connection reset by peer" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.650103 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34494->192.168.126.11:17697: read: connection reset by peer" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.650638 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.650736 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.682872 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.688405 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.690650 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 
16:29:56.754303 4758 apiserver.go:52] "Watching apiserver" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.756697 4758 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.757050 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.757413 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.757539 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.757549 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.757904 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.757979 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.757968 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.757945 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.758152 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.757969 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.759198 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.760256 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.760464 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.760528 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.760607 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.760820 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.760847 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.760982 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.762982 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.767277 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:55:15.627476786 +0000 UTC Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.809654 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.852087 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.856544 4758 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.866733 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.881207 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.887548 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.889386 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43" exitCode=255 Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.889424 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43"} Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894113 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894177 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894198 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894214 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894231 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894249 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894388 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894409 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894425 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894440 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894454 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894467 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894510 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894526 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894540 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894556 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894570 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894586 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894602 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894619 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894650 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894673 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894694 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894709 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894727 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894777 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894796 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894812 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894877 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894900 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894920 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894935 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894953 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894974 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.894998 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895014 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895050 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895066 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895085 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895100 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895116 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895118 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895130 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895176 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895195 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895212 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895229 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895244 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895259 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895275 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895291 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895306 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895322 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895338 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895353 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895370 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895370 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895389 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895407 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895424 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895441 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895457 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895494 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895509 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895524 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895543 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895558 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895576 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895593 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895612 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895629 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895644 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895662 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896668 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.900316 4758 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895649 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895676 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895691 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.895820 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:29:57.395804968 +0000 UTC m=+18.879144253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895841 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895891 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895995 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896144 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.900924 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896287 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896284 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896432 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896512 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901057 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896554 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896776 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896840 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.896984 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897009 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897017 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897183 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897368 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901174 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897425 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897570 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897576 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897715 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897762 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.897935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.898020 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.898062 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.898258 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.898290 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.898551 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.898926 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.898927 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.898981 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.898996 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.899937 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.899969 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.900173 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.900394 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901034 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901254 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.895680 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901395 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901416 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901425 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901435 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901453 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901455 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901473 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901483 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901494 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901550 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901558 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901583 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901609 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901631 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901702 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901714 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901725 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901782 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901801 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901819 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901836 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901852 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901869 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901886 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901904 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901921 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901939 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901963 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901988 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902010 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902029 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902047 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902065 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902083 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902103 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902122 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902143 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902164 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902182 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902198 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902213 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902228 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902246 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902262 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902279 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902300 4758 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902324 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902347 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902366 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902390 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902405 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902423 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902439 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902462 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902514 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:29:56 crc 
kubenswrapper[4758]: I0122 16:29:56.902538 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902570 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902594 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902614 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902630 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902646 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902663 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902680 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902695 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902712 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902727 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902760 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902778 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902794 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902813 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902828 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902845 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902862 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902879 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902899 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902915 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902932 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902949 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902967 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902982 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902998 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903012 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903028 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903044 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" 
(UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903077 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903093 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903109 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903124 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903140 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903155 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903170 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903185 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903202 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903219 4758 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903233 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903248 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903263 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903278 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903295 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903311 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903328 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903344 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903367 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 
16:29:56.903382 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903397 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903414 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903431 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903447 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903463 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903478 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903495 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903511 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903527 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903542 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903558 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903596 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903613 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903629 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903644 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903659 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903675 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903692 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903708 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903724 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903754 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904105 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904156 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904181 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904200 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904218 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904238 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904255 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 
16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904272 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904293 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904313 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904334 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904360 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904382 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904417 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904474 4758 reconciler_common.go:293] "Volume detached for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904486 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904496 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904505 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904517 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904527 4758 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904536 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904545 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904554 4758 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904563 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904571 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904581 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904590 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904602 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904614 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904626 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904638 4758 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904650 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904664 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904677 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904689 4758 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904700 4758 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904714 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904726 4758 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904756 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904771 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905540 4758 reconciler_common.go:293] "Volume detached for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905724 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905749 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905760 4758 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905772 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905782 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905797 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905823 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905844 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905857 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905869 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905884 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905898 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905913 4758 reconciler_common.go:293] 
"Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905922 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905931 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905940 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905950 4758 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905959 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905968 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905977 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905987 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.901940 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902189 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902243 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902379 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902656 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902791 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.902912 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.903678 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904023 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.908260 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904041 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904094 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904319 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904346 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.908318 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904788 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904958 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905289 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905452 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905475 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905544 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.904903 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905810 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.905916 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.907359 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.907398 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.907462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.907628 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.907673 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.907930 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.907960 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.907991 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.909149 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.909214 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.909239 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.909321 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.908885 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.909509 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.910104 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.910180 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.910192 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.910270 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.910370 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.910581 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.910647 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.910896 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.910931 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.911205 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.911264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.911255 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.911345 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.911390 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.911571 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.911784 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.911862 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.912397 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.912526 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.912665 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.912968 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.913025 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.913195 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.913243 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.913410 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.913659 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.913924 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.913962 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.914259 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.914415 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.914562 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.914895 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.914913 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.914965 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.914958 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.915000 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.915217 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.915227 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.915436 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.915500 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.915775 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.915800 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.915844 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.916187 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.916319 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.916375 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.916415 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.916419 4758 scope.go:117] "RemoveContainer" containerID="5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.918009 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.918606 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.918980 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.919067 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.919031 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.919409 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.919522 4758 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.919581 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.919765 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.920132 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.920210 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.920266 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.921318 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.921729 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.921866 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.922294 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.922542 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.922386 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.922685 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.922783 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.922798 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.923213 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.923295 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.923389 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.923558 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.923650 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.923892 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.923909 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.923985 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.924035 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.924228 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:57.424173016 +0000 UTC m=+18.907512301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.924318 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.924438 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.924532 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.924814 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.924863 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.925028 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.925132 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.925241 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.925330 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.925427 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.925531 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:57.425515724 +0000 UTC m=+18.908855009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.925762 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.926158 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.926103 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.926722 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.926861 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.926953 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.927031 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.927461 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.927626 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.944842 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.944964 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.945097 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.945155 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.945637 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.946603 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.946781 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.946829 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.946905 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.947031 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.947166 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.947366 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.947682 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.947783 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.947854 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.947967 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:57.447948915 +0000 UTC m=+18.931288200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.947988 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.947685 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.948235 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.948301 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:56 crc kubenswrapper[4758]: E0122 16:29:56.948411 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:57.448402028 +0000 UTC m=+18.931741313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.950946 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.951070 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.957346 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.957550 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.957875 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.967122 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.968142 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.970025 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.975461 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.977025 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.982254 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.987678 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:56 crc kubenswrapper[4758]: I0122 16:29:56.998131 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.008341 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.008953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009017 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009083 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009095 4758 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009111 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009121 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009132 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009142 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009153 4758 reconciler_common.go:293] "Volume 
detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009164 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009174 4758 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009185 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009197 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009208 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009228 4758 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009222 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009270 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009242 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009410 4758 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009429 4758 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009486 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009499 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009515 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009631 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009650 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009665 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009678 4758 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009691 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009704 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009716 4758 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009791 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009806 4758 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009819 4758 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009858 4758 reconciler_common.go:293] "Volume 
detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009874 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009889 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009900 4758 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009910 4758 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009920 4758 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009930 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009939 4758 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009949 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009960 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009969 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009980 4758 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009989 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.009999 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010010 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010020 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010029 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010040 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010050 4758 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010060 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010070 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010080 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010091 4758 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010103 4758 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010115 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010130 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010143 4758 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010156 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010166 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010176 4758 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010185 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010195 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010205 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010216 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010225 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010235 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010245 4758 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010255 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010264 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010274 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") 
on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010284 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010294 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010304 4758 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010315 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010327 4758 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010338 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010349 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010358 4758 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010368 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010378 4758 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010388 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010836 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010938 4758 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010955 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010976 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.010989 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011001 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011012 4758 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011021 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011031 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011040 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011050 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011058 4758 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011067 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011076 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011086 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011095 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011106 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011115 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011124 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011133 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011142 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011153 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011162 4758 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011171 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011180 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011189 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011199 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011209 4758 reconciler_common.go:293] "Volume detached for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011221 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011229 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011238 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011247 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011261 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011269 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011277 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011286 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011297 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011307 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011318 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011326 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011336 4758 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011346 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011356 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011365 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011374 4758 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011384 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011394 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011403 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011413 4758 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011422 4758 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011431 4758 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011441 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011450 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011460 4758 reconciler_common.go:293] "Volume 
detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011474 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011483 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011491 4758 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011501 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011509 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011519 4758 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011527 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011537 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011545 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011554 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011563 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.011573 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.018023 4758 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.027540 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.036412 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.044710 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.072481 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.081868 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:29:57 crc kubenswrapper[4758]: W0122 16:29:57.091288 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-58f1272e8a1f91dcb14a922292e317abaae48fda833a7c9b7c5746298ce45ca7 WatchSource:0}: Error finding container 58f1272e8a1f91dcb14a922292e317abaae48fda833a7c9b7c5746298ce45ca7: Status 404 returned error can't find the container with id 58f1272e8a1f91dcb14a922292e317abaae48fda833a7c9b7c5746298ce45ca7 Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.093196 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:29:57 crc kubenswrapper[4758]: W0122 16:29:57.096400 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-f45a010524ef4fdee02902419adda1617778217b7d17d62bea78c2c0449d1523 WatchSource:0}: Error finding container f45a010524ef4fdee02902419adda1617778217b7d17d62bea78c2c0449d1523: Status 404 returned error can't find the container with id f45a010524ef4fdee02902419adda1617778217b7d17d62bea78c2c0449d1523 Jan 22 16:29:57 crc kubenswrapper[4758]: W0122 16:29:57.140122 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-6ffd3362ed54e2de5190d730d9f913afc42afbf352408730d72f848a3c8fa695 WatchSource:0}: Error finding container 6ffd3362ed54e2de5190d730d9f913afc42afbf352408730d72f848a3c8fa695: Status 404 returned error can't find the container with id 6ffd3362ed54e2de5190d730d9f913afc42afbf352408730d72f848a3c8fa695 Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.415516 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.415663 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:29:58.415643553 +0000 UTC m=+19.898982838 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.516508 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.516810 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.516934 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.517037 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.516725 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517206 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517221 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.516907 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517309 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-22 16:29:58.517288793 +0000 UTC m=+20.000628118 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517089 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517344 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517353 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517380 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:58.517372295 +0000 UTC m=+20.000711680 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517142 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517717 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:58.517706704 +0000 UTC m=+20.001045989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:57 crc kubenswrapper[4758]: E0122 16:29:57.517783 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:58.517772766 +0000 UTC m=+20.001112111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.768885 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:35:15.237248293 +0000 UTC Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.894546 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8"} Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.894632 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"58f1272e8a1f91dcb14a922292e317abaae48fda833a7c9b7c5746298ce45ca7"} Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.898473 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.900553 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6"} Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.901098 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.903513 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5"} Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.903537 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832"} Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.903547 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6ffd3362ed54e2de5190d730d9f913afc42afbf352408730d72f848a3c8fa695"} Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.906705 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f45a010524ef4fdee02902419adda1617778217b7d17d62bea78c2c0449d1523"} Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.919279 4758 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.938773 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.950498 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.963678 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.978134 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:57 crc kubenswrapper[4758]: I0122 16:29:57.988817 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.005925 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.020163 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.035319 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.046615 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.058878 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.075130 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.089655 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.107012 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.124805 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.139803 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.430571 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.430796 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:00.430775072 +0000 UTC m=+21.914114357 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.531936 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.532204 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.532323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.532412 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.532603 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.532711 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:00.532697069 +0000 UTC m=+22.016036354 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533147 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533224 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533291 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533371 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:00.533362068 +0000 UTC m=+22.016701353 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533468 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533540 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:00.533531603 +0000 UTC m=+22.016870888 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533638 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533703 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533779 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.533877 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:00.533869492 +0000 UTC m=+22.017208777 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.769883 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:08:04.24346882 +0000 UTC Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.807833 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.807855 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.807963 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.808090 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.808286 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:58 crc kubenswrapper[4758]: E0122 16:29:58.808355 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.811803 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.812418 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.813701 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.814415 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.815522 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.816093 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.816727 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.817606 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.818191 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.819362 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.820036 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" 
Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.821193 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.821301 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.821756 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.822255 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.823242 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.823804 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.824688 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.825116 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.825646 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.826623 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.827063 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.828010 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.828443 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.838247 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.850765 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.862179 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.872829 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.885858 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.888261 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.901255 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.921270 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.935300 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.936380 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.938341 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 22 16:29:58 crc kubenswrapper[4758]: I0122 16:29:58.939044 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.390667 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.391268 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.391861 4758 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.391970 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.425175 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.426331 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.426993 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.428854 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.430450 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.430952 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.431973 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.432622 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.433641 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.434452 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.435773 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.436970 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.437799 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.438571 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.439620 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.440403 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.441269 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.441790 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.442634 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.443161 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.443695 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.444593 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.770861 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:04:44.802340026 +0000 UTC Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.788931 4758 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.790824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.790850 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.790858 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.790911 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.796716 4758 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.796840 4758 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.797694 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.797715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4758]: 
I0122 16:29:59.797724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.797751 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.797772 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4758]: E0122 16:29:59.813373 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 
2025-08-24T17:21:41Z" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.816432 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.816478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.816491 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.816513 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.816526 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4758]: E0122 16:29:59.829665 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 
2025-08-24T17:21:41Z" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.832926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.832949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.832958 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.832971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.832980 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4758]: E0122 16:29:59.847698 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 
2025-08-24T17:21:41Z" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.851094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.851145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.851156 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.851174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.851185 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4758]: E0122 16:29:59.863620 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 
2025-08-24T17:21:41Z" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.866682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.866828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.866919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.867013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.867101 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4758]: E0122 16:29:59.880533 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 
2025-08-24T17:21:41Z" Jan 22 16:29:59 crc kubenswrapper[4758]: E0122 16:29:59.880645 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.882399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.882434 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.882444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.882460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.882469 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.919752 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c"} Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.938727 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.953581 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.969800 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.984863 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.984919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.984940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.984955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.984965 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4758]: I0122 16:29:59.988573 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.007071 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.021589 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:00Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.032687 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:00Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.044421 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:00Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.087602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.087667 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.087686 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.087711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.087728 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.190977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.191020 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.191035 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.191052 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.191063 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.294199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.294249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.294260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.294277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.294289 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.396974 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.397019 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.397028 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.397042 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.397051 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.483788 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.483962 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:04.483946644 +0000 UTC m=+25.967285929 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.499821 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.499962 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.499993 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.500025 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.500048 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.584867 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.584929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.584968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.584975 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.585003 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") 
" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585066 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:04.585040008 +0000 UTC m=+26.068379303 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585118 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585135 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585179 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585204 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585280 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:04.585252974 +0000 UTC m=+26.068592299 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585141 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585321 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585318 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585384 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:04.585364667 +0000 UTC m=+26.068704002 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.585433 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:04.585411038 +0000 UTC m=+26.068750343 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.602694 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.602789 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.602807 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.602845 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.602862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.705295 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.705338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.705346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.705360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.705369 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.772036 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:32:43.720747898 +0000 UTC Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.807703 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.807913 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.807938 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.808173 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.808251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.808278 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.808269 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.808288 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.808308 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.808355 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4758]: E0122 16:30:00.808416 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.911809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.911871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.911894 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.911927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4758]: I0122 16:30:00.911951 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.015022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.015081 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.015090 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.015105 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.015114 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.119653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.119724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.119782 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.119813 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.119836 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.222898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.222957 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.222970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.222990 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.223008 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.325543 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.325585 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.325597 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.325616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.325628 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.428320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.428476 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.428508 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.428537 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.428560 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.531054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.531131 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.531149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.531171 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.531186 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.608444 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.623829 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.623948 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.627362 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.633072 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.633117 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.633126 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.633139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 
16:30:01.633149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.643821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.667450 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.685283 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.698205 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.709132 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.724573 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.735724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.735779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.735802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.735820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.735831 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.736832 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.748441 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.759137 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.772579 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:10:12.980641014 +0000 UTC Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.775103 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.785707 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.798668 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.810338 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.823648 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.833404 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.837543 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.837598 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.837609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.837622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.837632 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.849553 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.939528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.939566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.939575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.939589 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:01 crc kubenswrapper[4758]: I0122 16:30:01.939601 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:01Z","lastTransitionTime":"2026-01-22T16:30:01Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.042235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.042274 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.042283 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.042296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.042306 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.044367 4758 csr.go:261] certificate signing request csr-zq9lp is approved, waiting to be issued Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.056574 4758 csr.go:257] certificate signing request csr-zq9lp is issued Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.060207 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-g8wjx"] Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.060556 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-g8wjx" Jan 22 16:30:02 crc kubenswrapper[4758]: W0122 16:30:02.061688 4758 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 22 16:30:02 crc kubenswrapper[4758]: E0122 16:30:02.061725 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:02 crc kubenswrapper[4758]: W0122 16:30:02.061836 4758 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 22 16:30:02 crc kubenswrapper[4758]: E0122 16:30:02.061849 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:02 crc kubenswrapper[4758]: W0122 16:30:02.061965 4758 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 22 16:30:02 crc kubenswrapper[4758]: E0122 16:30:02.062019 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.073176 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.091548 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.097986 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrsf\" (UniqueName: \"kubernetes.io/projected/425c9f0a-b14e-48d3-bd86-6fc510f22a7f-kube-api-access-mtrsf\") pod \"node-resolver-g8wjx\" (UID: \"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\") " pod="openshift-dns/node-resolver-g8wjx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.098029 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/425c9f0a-b14e-48d3-bd86-6fc510f22a7f-hosts-file\") pod \"node-resolver-g8wjx\" (UID: \"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\") " pod="openshift-dns/node-resolver-g8wjx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.108136 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountP
ath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.126277 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir
\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name
\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.140139 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.144516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.144588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.144603 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.144629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.144643 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.159559 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-zsbtx"] Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.159938 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.162321 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.162331 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.162868 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.162926 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.163096 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.163896 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.184697 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.195971 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.198891 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkms\" (UniqueName: \"kubernetes.io/projected/a4b5f24a-19df-4969-b547-a5acc323c58a-kube-api-access-gzkms\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.198950 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4b5f24a-19df-4969-b547-a5acc323c58a-proxy-tls\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.199013 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" 
(UniqueName: \"kubernetes.io/host-path/a4b5f24a-19df-4969-b547-a5acc323c58a-rootfs\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.199039 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4b5f24a-19df-4969-b547-a5acc323c58a-mcd-auth-proxy-config\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.199092 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrsf\" (UniqueName: \"kubernetes.io/projected/425c9f0a-b14e-48d3-bd86-6fc510f22a7f-kube-api-access-mtrsf\") pod \"node-resolver-g8wjx\" (UID: \"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\") " pod="openshift-dns/node-resolver-g8wjx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.199184 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/425c9f0a-b14e-48d3-bd86-6fc510f22a7f-hosts-file\") pod \"node-resolver-g8wjx\" (UID: \"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\") " pod="openshift-dns/node-resolver-g8wjx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.199338 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/425c9f0a-b14e-48d3-bd86-6fc510f22a7f-hosts-file\") pod \"node-resolver-g8wjx\" (UID: \"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\") " pod="openshift-dns/node-resolver-g8wjx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.211916 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.220116 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.234596 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.246920 
4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.246967 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.246978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.246995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.247006 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.270822 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.295649 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.299881 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkms\" (UniqueName: \"kubernetes.io/projected/a4b5f24a-19df-4969-b547-a5acc323c58a-kube-api-access-gzkms\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.299926 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4b5f24a-19df-4969-b547-a5acc323c58a-proxy-tls\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc 
kubenswrapper[4758]: I0122 16:30:02.299949 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a4b5f24a-19df-4969-b547-a5acc323c58a-rootfs\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.299966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4b5f24a-19df-4969-b547-a5acc323c58a-mcd-auth-proxy-config\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.300759 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a4b5f24a-19df-4969-b547-a5acc323c58a-rootfs\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.301150 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4b5f24a-19df-4969-b547-a5acc323c58a-mcd-auth-proxy-config\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.306123 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4b5f24a-19df-4969-b547-a5acc323c58a-proxy-tls\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.317120 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.326160 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkms\" (UniqueName: \"kubernetes.io/projected/a4b5f24a-19df-4969-b547-a5acc323c58a-kube-api-access-gzkms\") pod \"machine-config-daemon-zsbtx\" (UID: \"a4b5f24a-19df-4969-b547-a5acc323c58a\") " pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.336354 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.349999 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.350056 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.350072 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.350095 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.350110 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.376318 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.424649 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.436604 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.449062 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.453002 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.453102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.453172 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.453256 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.453321 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.460962 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.471354 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.480436 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 
2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: W0122 16:30:02.486215 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b5f24a_19df_4969_b547_a5acc323c58a.slice/crio-94e80eaf4dff507a24157e2374775421ce351f1b90706bd4f799ef59008ff930 WatchSource:0}: Error finding container 94e80eaf4dff507a24157e2374775421ce351f1b90706bd4f799ef59008ff930: Status 404 returned error can't find the container with id 94e80eaf4dff507a24157e2374775421ce351f1b90706bd4f799ef59008ff930 Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.540711 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-fqfn9"] Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.541368 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-7dvfg"] Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.541577 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.541616 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.546158 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.546279 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.546292 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.546374 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.546486 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.546546 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.546560 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.555199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.555230 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.555238 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.555251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.555263 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.556572 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.578915 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.591214 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603016 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97853b38-352d-42df-ad31-639c0e58093a-cni-binary-copy\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603067 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-cnibin\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603091 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-hostroot\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603120 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9182510-5fc6-4717-b94c-de8ca4fb7c54-cni-binary-copy\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603137 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-os-release\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603156 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-run-netns\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603177 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-cnibin\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603202 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrsz\" (UniqueName: \"kubernetes.io/projected/97853b38-352d-42df-ad31-639c0e58093a-kube-api-access-wcrsz\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603238 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-os-release\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603251 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-multus-socket-dir-parent\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603352 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-multus-conf-dir\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603388 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-etc-kubernetes\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603415 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c9182510-5fc6-4717-b94c-de8ca4fb7c54-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603439 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-var-lib-cni-bin\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603461 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-run-multus-certs\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603493 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-run-k8s-cni-cncf-io\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603513 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-var-lib-cni-multus\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603545 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/97853b38-352d-42df-ad31-639c0e58093a-multus-daemon-config\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603572 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-multus-cni-dir\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603593 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-system-cni-dir\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603619 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxd2\" (UniqueName: \"kubernetes.io/projected/c9182510-5fc6-4717-b94c-de8ca4fb7c54-kube-api-access-2mxd2\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603637 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-var-lib-kubelet\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.603674 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-system-cni-dir\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.604490 4758 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.618452 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.631547 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.640564 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.654543 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.657047 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.657110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.657122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.657139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.657153 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.673623 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.691145 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704771 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-system-cni-dir\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704804 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-var-lib-kubelet\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704820 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97853b38-352d-42df-ad31-639c0e58093a-cni-binary-copy\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704843 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-hostroot\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704858 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-cnibin\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704871 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/c9182510-5fc6-4717-b94c-de8ca4fb7c54-cni-binary-copy\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704893 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-run-netns\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-os-release\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704921 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-cnibin\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704936 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704951 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-os-release\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-multus-socket-dir-parent\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704882 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.704980 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrsz\" (UniqueName: \"kubernetes.io/projected/97853b38-352d-42df-ad31-639c0e58093a-kube-api-access-wcrsz\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705099 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-system-cni-dir\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705142 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-multus-conf-dir\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705150 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-hostroot\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705117 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-multus-conf-dir\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705177 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-cnibin\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705213 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-cnibin\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705207 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c9182510-5fc6-4717-b94c-de8ca4fb7c54-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705233 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-run-netns\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705247 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-var-lib-cni-bin\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705280 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-os-release\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705282 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-etc-kubernetes\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " 
pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705308 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-etc-kubernetes\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705310 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-run-multus-certs\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705347 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-var-lib-cni-multus\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705372 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/97853b38-352d-42df-ad31-639c0e58093a-multus-daemon-config\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705398 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-multus-cni-dir\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705419 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-run-k8s-cni-cncf-io\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705439 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-system-cni-dir\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705482 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mxd2\" (UniqueName: \"kubernetes.io/projected/c9182510-5fc6-4717-b94c-de8ca4fb7c54-kube-api-access-2mxd2\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705723 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97853b38-352d-42df-ad31-639c0e58093a-cni-binary-copy\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705809 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-var-lib-kubelet\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-os-release\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705901 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9182510-5fc6-4717-b94c-de8ca4fb7c54-cni-binary-copy\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705938 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c9182510-5fc6-4717-b94c-de8ca4fb7c54-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705982 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-multus-socket-dir-parent\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.706020 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-run-k8s-cni-cncf-io\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.706159 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-multus-cni-dir\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.705326 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-run-multus-certs\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.706154 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-system-cni-dir\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.706189 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-var-lib-cni-bin\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.706214 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/97853b38-352d-42df-ad31-639c0e58093a-host-var-lib-cni-multus\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.706452 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/97853b38-352d-42df-ad31-639c0e58093a-multus-daemon-config\") pod \"multus-7dvfg\" (UID: \"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.717536 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.723524 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrsz\" (UniqueName: \"kubernetes.io/projected/97853b38-352d-42df-ad31-639c0e58093a-kube-api-access-wcrsz\") pod \"multus-7dvfg\" (UID: 
\"97853b38-352d-42df-ad31-639c0e58093a\") " pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.723548 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mxd2\" (UniqueName: \"kubernetes.io/projected/c9182510-5fc6-4717-b94c-de8ca4fb7c54-kube-api-access-2mxd2\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.731516 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.743085 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.752585 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.759780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.759824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.759841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.759855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.759866 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.766133 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: 
I0122 16:30:02.772828 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:49:12.542600817 +0000 UTC Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.782328 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688d
f312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.799572 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 
2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.802727 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9182510-5fc6-4717-b94c-de8ca4fb7c54-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fqfn9\" (UID: \"c9182510-5fc6-4717-b94c-de8ca4fb7c54\") " pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.807649 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.807731 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:02 crc kubenswrapper[4758]: E0122 16:30:02.807752 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:02 crc kubenswrapper[4758]: E0122 16:30:02.807868 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.808051 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:02 crc kubenswrapper[4758]: E0122 16:30:02.808253 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.824473 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.853363 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.861489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.861763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.861875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.861995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.862062 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.873335 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.879482 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7dvfg" Jan 22 16:30:02 crc kubenswrapper[4758]: W0122 16:30:02.887694 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9182510_5fc6_4717_b94c_de8ca4fb7c54.slice/crio-ad1b0d11717bb89c49e7c6f6eca710906e0f171d870c377974b3a82b37c3664b WatchSource:0}: Error finding container ad1b0d11717bb89c49e7c6f6eca710906e0f171d870c377974b3a82b37c3664b: Status 404 returned error can't find the container with id ad1b0d11717bb89c49e7c6f6eca710906e0f171d870c377974b3a82b37c3664b Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.890191 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.914691 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.946099 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7dvfg" event={"ID":"97853b38-352d-42df-ad31-639c0e58093a","Type":"ContainerStarted","Data":"7b81a244320235350083ad41570edfb5ac7db0f112c4090ef91e0b5462ab856f"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.946936 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" event={"ID":"c9182510-5fc6-4717-b94c-de8ca4fb7c54","Type":"ContainerStarted","Data":"ad1b0d11717bb89c49e7c6f6eca710906e0f171d870c377974b3a82b37c3664b"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.947849 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.947870 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"94e80eaf4dff507a24157e2374775421ce351f1b90706bd4f799ef59008ff930"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.953690 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jdpck"] Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.956174 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:02 crc kubenswrapper[4758]: W0122 16:30:02.965194 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 22 16:30:02 crc kubenswrapper[4758]: E0122 16:30:02.965256 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:02 crc kubenswrapper[4758]: W0122 16:30:02.965322 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 22 16:30:02 crc kubenswrapper[4758]: E0122 16:30:02.965341 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.971155 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.971193 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.971203 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.971218 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.971231 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:02Z","lastTransitionTime":"2026-01-22T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.978721 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.978795 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.978944 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.985766 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 16:30:02 crc kubenswrapper[4758]: I0122 16:30:02.985907 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.009600 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010118 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-systemd\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010088 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2
432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010171 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-systemd-units\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010198 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-log-socket\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010218 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-netd\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010234 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-script-lib\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010253 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-etc-openvswitch\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010268 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010299 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-ovn-kubernetes\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010349 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-bin\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010380 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-openvswitch\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010399 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovn-node-metrics-cert\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010431 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-config\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010454 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-node-log\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010476 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-slash\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010514 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-kubelet\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010532 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-netns\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010551 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-var-lib-openvswitch\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010570 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-ovn\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010586 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-env-overrides\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.010618 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-96qwj\" (UniqueName: \"kubernetes.io/projected/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-kube-api-access-96qwj\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.029385 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.058069 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 16:25:02 +0000 UTC, rotation deadline is 2026-10-11 19:15:29.965889095 +0000 UTC Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.058126 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6290h45m26.907765652s for next certificate rotation Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.059392 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.071984 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.077102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.077132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.077140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.077201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.077212 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:03Z","lastTransitionTime":"2026-01-22T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.089218 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.099975 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.111959 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96qwj\" (UniqueName: \"kubernetes.io/projected/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-kube-api-access-96qwj\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112012 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-systemd\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112030 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-systemd-units\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112050 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-log-socket\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112066 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-netd\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112081 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-script-lib\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-etc-openvswitch\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112113 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112141 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-ovn-kubernetes\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112145 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-log-socket\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112157 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-systemd-units\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112161 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-systemd\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112195 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112178 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-bin\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112195 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-etc-openvswitch\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112216 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-netd\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112154 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-bin\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112317 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-ovn-kubernetes\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-openvswitch\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112334 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-openvswitch\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovn-node-metrics-cert\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112359 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-config\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112376 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-node-log\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112390 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-slash\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112412 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-kubelet\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112424 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-node-log\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112432 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-netns\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112445 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-var-lib-openvswitch\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112458 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-ovn\") pod \"ovnkube-node-jdpck\" (UID: 
\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112470 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-env-overrides\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.113009 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-env-overrides\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.112447 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-slash\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.113066 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-ovn\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.113067 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-var-lib-openvswitch\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.113105 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-netns\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.113108 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-config\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.113089 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-kubelet\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.113228 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-script-lib\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.114725 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovn-node-metrics-cert\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.119249 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.131924 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.143309 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.154097 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.166476 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.180099 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.180156 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.180172 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.180195 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.180211 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:03Z","lastTransitionTime":"2026-01-22T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.193145 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: E0122 16:30:03.210695 4758 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.243623 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.259042 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.264153 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 16:30:03 crc kubenswrapper[4758]: E0122 16:30:03.271660 4758 projected.go:194] Error preparing data for projected volume kube-api-access-mtrsf for pod openshift-dns/node-resolver-g8wjx: failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:03 crc kubenswrapper[4758]: E0122 16:30:03.271766 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/425c9f0a-b14e-48d3-bd86-6fc510f22a7f-kube-api-access-mtrsf podName:425c9f0a-b14e-48d3-bd86-6fc510f22a7f nodeName:}" failed. No retries permitted until 2026-01-22 16:30:03.771723553 +0000 UTC m=+25.255062838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mtrsf" (UniqueName: "kubernetes.io/projected/425c9f0a-b14e-48d3-bd86-6fc510f22a7f-kube-api-access-mtrsf") pod "node-resolver-g8wjx" (UID: "425c9f0a-b14e-48d3-bd86-6fc510f22a7f") : failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.274659 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.282638 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.282664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.282673 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.282685 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.282697 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:03Z","lastTransitionTime":"2026-01-22T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.284789 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-
kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.297214 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.356189 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.385776 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.385808 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.385818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.385833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.385842 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:03Z","lastTransitionTime":"2026-01-22T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.491655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.491695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.491704 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.491718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.491728 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:03Z","lastTransitionTime":"2026-01-22T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.594288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.594728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.594784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.594810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.594826 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:03Z","lastTransitionTime":"2026-01-22T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.697635 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.697661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.697668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.697681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.697689 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:03Z","lastTransitionTime":"2026-01-22T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.773491 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:23:52.326649196 +0000 UTC Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.799610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.799652 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.799665 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.799682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.799691 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:03Z","lastTransitionTime":"2026-01-22T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.823982 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrsf\" (UniqueName: \"kubernetes.io/projected/425c9f0a-b14e-48d3-bd86-6fc510f22a7f-kube-api-access-mtrsf\") pod \"node-resolver-g8wjx\" (UID: \"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\") " pod="openshift-dns/node-resolver-g8wjx" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.828772 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtrsf\" (UniqueName: \"kubernetes.io/projected/425c9f0a-b14e-48d3-bd86-6fc510f22a7f-kube-api-access-mtrsf\") pod \"node-resolver-g8wjx\" (UID: \"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\") " pod="openshift-dns/node-resolver-g8wjx" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.872052 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-g8wjx" Jan 22 16:30:03 crc kubenswrapper[4758]: W0122 16:30:03.887274 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod425c9f0a_b14e_48d3_bd86_6fc510f22a7f.slice/crio-0660efe220def840bc82adde7596eec3fff9929976048490f769cb2310432819 WatchSource:0}: Error finding container 0660efe220def840bc82adde7596eec3fff9929976048490f769cb2310432819: Status 404 returned error can't find the container with id 0660efe220def840bc82adde7596eec3fff9929976048490f769cb2310432819 Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.905487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.905528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.905537 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.905553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.905565 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:03Z","lastTransitionTime":"2026-01-22T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.952910 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-g8wjx" event={"ID":"425c9f0a-b14e-48d3-bd86-6fc510f22a7f","Type":"ContainerStarted","Data":"0660efe220def840bc82adde7596eec3fff9929976048490f769cb2310432819"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.955255 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7dvfg" event={"ID":"97853b38-352d-42df-ad31-639c0e58093a","Type":"ContainerStarted","Data":"12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.958474 4758 generic.go:334] "Generic (PLEG): container finished" podID="c9182510-5fc6-4717-b94c-de8ca4fb7c54" containerID="66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098" exitCode=0 Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.958774 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" event={"ID":"c9182510-5fc6-4717-b94c-de8ca4fb7c54","Type":"ContainerDied","Data":"66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.963919 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7"} Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.970970 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.983607 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:03 crc kubenswrapper[4758]: I0122 16:30:03.997280 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.008186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.008215 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.008356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.008370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.008382 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.015435 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.028384 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc 
kubenswrapper[4758]: I0122 16:30:04.039383 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.050552 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.063431 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.078665 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.098320 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a
53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe0
3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.112165 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.114908 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.114956 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.114964 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.114977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.114987 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.125705 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.127861 4758 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.127880 4758 projected.go:194] Error preparing data for projected volume kube-api-access-96qwj for pod openshift-ovn-kubernetes/ovnkube-node-jdpck: failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.127920 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-kube-api-access-96qwj podName:9b60a09e-8bfa-4d2e-998d-e1db5dec0faa nodeName:}" failed. 
No retries permitted until 2026-01-22 16:30:04.62790571 +0000 UTC m=+26.111244995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-96qwj" (UniqueName: "kubernetes.io/projected/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-kube-api-access-96qwj") pod "ovnkube-node-jdpck" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa") : failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.141316 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.154327 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.172990 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.184629 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.196772 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.208943 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.217297 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.217324 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.217342 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.217414 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.217454 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.221328 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.231703 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.242903 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.260840 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.271015 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.290868 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.322462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.322504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.322518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.322535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.322546 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.326119 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c
61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.364718 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.399708 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.425762 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.425802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.425810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.425827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.425836 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.431128 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lt6tl"] Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.431706 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.441812 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.450562 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.470592 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.490827 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.509917 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.527643 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.527676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 
crc kubenswrapper[4758]: I0122 16:30:04.527687 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.527702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.527714 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.529664 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.529772 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhkzn\" (UniqueName: \"kubernetes.io/projected/090f3014-3d99-49d5-8a9d-9719b4efbcf8-kube-api-access-bhkzn\") pod \"node-ca-lt6tl\" (UID: \"090f3014-3d99-49d5-8a9d-9719b4efbcf8\") " pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.529809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/090f3014-3d99-49d5-8a9d-9719b4efbcf8-host\") pod \"node-ca-lt6tl\" (UID: \"090f3014-3d99-49d5-8a9d-9719b4efbcf8\") " pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.529851 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/090f3014-3d99-49d5-8a9d-9719b4efbcf8-serviceca\") pod \"node-ca-lt6tl\" (UID: \"090f3014-3d99-49d5-8a9d-9719b4efbcf8\") " pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.529960 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:12.52994365 +0000 UTC m=+34.013282935 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.557800 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.570035 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.590973 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.629736 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.629783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 
16:30:04.629794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.629810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.629822 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.630186 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhkzn\" (UniqueName: \"kubernetes.io/projected/090f3014-3d99-49d5-8a9d-9719b4efbcf8-kube-api-access-bhkzn\") pod \"node-ca-lt6tl\" (UID: \"090f3014-3d99-49d5-8a9d-9719b4efbcf8\") " pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.630221 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.630247 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/090f3014-3d99-49d5-8a9d-9719b4efbcf8-host\") pod \"node-ca-lt6tl\" (UID: \"090f3014-3d99-49d5-8a9d-9719b4efbcf8\") " pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.630275 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.630300 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96qwj\" (UniqueName: \"kubernetes.io/projected/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-kube-api-access-96qwj\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.630320 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/090f3014-3d99-49d5-8a9d-9719b4efbcf8-serviceca\") pod \"node-ca-lt6tl\" (UID: \"090f3014-3d99-49d5-8a9d-9719b4efbcf8\") " pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.630341 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.630375 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630482 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630534 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:12.63051843 +0000 UTC m=+34.113857715 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630584 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630601 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630611 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630627 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630639 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:12.630629543 +0000 UTC m=+34.113968828 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630642 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630655 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630683 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:12.630673764 +0000 UTC m=+34.114013049 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.630720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/090f3014-3d99-49d5-8a9d-9719b4efbcf8-host\") pod \"node-ca-lt6tl\" (UID: \"090f3014-3d99-49d5-8a9d-9719b4efbcf8\") " pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630793 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.630879 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:12.630859019 +0000 UTC m=+34.114198314 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.631920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/090f3014-3d99-49d5-8a9d-9719b4efbcf8-serviceca\") pod \"node-ca-lt6tl\" (UID: \"090f3014-3d99-49d5-8a9d-9719b4efbcf8\") " pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.636043 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96qwj\" (UniqueName: \"kubernetes.io/projected/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-kube-api-access-96qwj\") pod \"ovnkube-node-jdpck\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.644651 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.662706 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhkzn\" (UniqueName: \"kubernetes.io/projected/090f3014-3d99-49d5-8a9d-9719b4efbcf8-kube-api-access-bhkzn\") pod \"node-ca-lt6tl\" (UID: \"090f3014-3d99-49d5-8a9d-9719b4efbcf8\") " pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.696812 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.733054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.733100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.733114 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.733134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.733149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.737468 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.743569 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lt6tl" Jan 22 16:30:04 crc kubenswrapper[4758]: W0122 16:30:04.761228 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod090f3014_3d99_49d5_8a9d_9719b4efbcf8.slice/crio-2bcef2f0498e8b96ef504c2461e5843ad918c8cb8e2cfe711531f2af137ccbc3 WatchSource:0}: Error finding container 2bcef2f0498e8b96ef504c2461e5843ad918c8cb8e2cfe711531f2af137ccbc3: Status 404 returned error can't find the container with id 2bcef2f0498e8b96ef504c2461e5843ad918c8cb8e2cfe711531f2af137ccbc3 Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.775448 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:54:55.061591078 +0000 UTC Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.782673 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc 
kubenswrapper[4758]: I0122 16:30:04.786806 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:04 crc kubenswrapper[4758]: W0122 16:30:04.799172 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b60a09e_8bfa_4d2e_998d_e1db5dec0faa.slice/crio-6be781d7852bbffdd00c288a7b2594b4d11ab247f25a95ae78082f08c77990e7 WatchSource:0}: Error finding container 6be781d7852bbffdd00c288a7b2594b4d11ab247f25a95ae78082f08c77990e7: Status 404 returned error can't find the container with id 6be781d7852bbffdd00c288a7b2594b4d11ab247f25a95ae78082f08c77990e7 Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.816138 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.816241 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.816521 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.816567 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.816670 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:04 crc kubenswrapper[4758]: E0122 16:30:04.816709 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.834462 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.845772 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.845809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.845819 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.845833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.845845 
4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.869528 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f5
8408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.901245 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.937171 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.947878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.947915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.947925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.947939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.947950 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:04Z","lastTransitionTime":"2026-01-22T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.971441 4758 generic.go:334] "Generic (PLEG): container finished" podID="c9182510-5fc6-4717-b94c-de8ca4fb7c54" containerID="b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063" exitCode=0 Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.971509 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" event={"ID":"c9182510-5fc6-4717-b94c-de8ca4fb7c54","Type":"ContainerDied","Data":"b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.973365 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lt6tl" event={"ID":"090f3014-3d99-49d5-8a9d-9719b4efbcf8","Type":"ContainerStarted","Data":"2bcef2f0498e8b96ef504c2461e5843ad918c8cb8e2cfe711531f2af137ccbc3"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.974414 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-g8wjx" event={"ID":"425c9f0a-b14e-48d3-bd86-6fc510f22a7f","Type":"ContainerStarted","Data":"0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.976038 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9" exitCode=0 Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.976079 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.976120 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"6be781d7852bbffdd00c288a7b2594b4d11ab247f25a95ae78082f08c77990e7"} Jan 22 16:30:04 crc kubenswrapper[4758]: I0122 16:30:04.979690 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.023377 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.058134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.058165 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.058173 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.058187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.058195 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.061557 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.112216 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.139535 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.159969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.159996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.160004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.160017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.160026 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.188050 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.218029 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.260023 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.261695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.261755 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.261768 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.261785 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.261797 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.299673 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.338211 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.364180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.364208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.364216 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.364228 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.364236 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.386880 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z 
is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.421087 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.458819 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.466458 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.466482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.466492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.466506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.466514 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.496650 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.536354 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.568823 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.568865 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.568875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.568889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.568899 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.581045 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.617349 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.663638 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.670688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.670732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.670761 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.670779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.670798 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.699342 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.742393 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.772979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.773029 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.773047 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.773063 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.773075 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.776158 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:29:37.824224402 +0000 UTC Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.781300 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.874995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.875034 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.875042 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.875056 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.875068 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.977765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.977891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.977921 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.977938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.977950 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:05Z","lastTransitionTime":"2026-01-22T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.980979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.981041 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.981055 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.981068 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.981079 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.981089 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.982871 4758 generic.go:334] "Generic (PLEG): container finished" podID="c9182510-5fc6-4717-b94c-de8ca4fb7c54" containerID="19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02" exitCode=0 Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.982918 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" event={"ID":"c9182510-5fc6-4717-b94c-de8ca4fb7c54","Type":"ContainerDied","Data":"19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02"} Jan 22 16:30:05 crc kubenswrapper[4758]: I0122 16:30:05.984274 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lt6tl" event={"ID":"090f3014-3d99-49d5-8a9d-9719b4efbcf8","Type":"ContainerStarted","Data":"3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.011335 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.025212 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.038228 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.052209 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.065542 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.074934 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.088061 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.088106 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.088117 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.088136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.088149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:06Z","lastTransitionTime":"2026-01-22T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.088231 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.101768 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.136952 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.179322 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.190710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.190787 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.190799 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.190817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.190829 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:06Z","lastTransitionTime":"2026-01-22T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.219049 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.268525 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269
019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.293234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.293268 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.293277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.293289 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.293299 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:06Z","lastTransitionTime":"2026-01-22T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.300474 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.337930 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.383402 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.394919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.394957 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.394969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.394987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.394998 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:06Z","lastTransitionTime":"2026-01-22T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.419060 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.455400 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.497081 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.497115 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.497123 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.497138 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.497146 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:06Z","lastTransitionTime":"2026-01-22T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.498954 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.540899 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\
",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.576663 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.599217 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.599280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.599303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.599333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.599356 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:06Z","lastTransitionTime":"2026-01-22T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.624251 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.663459 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.700024 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.701392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.701426 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.701436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.701449 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.701458 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:06Z","lastTransitionTime":"2026-01-22T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.738349 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.777084 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:34:16.402282468 +0000 UTC Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.789242 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.804597 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.804644 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.804656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.804672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.804684 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:06Z","lastTransitionTime":"2026-01-22T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.807015 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:06 crc kubenswrapper[4758]: E0122 16:30:06.807294 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.807401 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:06 crc kubenswrapper[4758]: E0122 16:30:06.807495 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.807555 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:06 crc kubenswrapper[4758]: E0122 16:30:06.807675 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.818839 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.859112 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.897064 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.906451 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.906468 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.906477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.906489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.906498 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:06Z","lastTransitionTime":"2026-01-22T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.938424 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.989866 4758 generic.go:334] "Generic (PLEG): container finished" podID="c9182510-5fc6-4717-b94c-de8ca4fb7c54" 
containerID="fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28" exitCode=0 Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.990266 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" event={"ID":"c9182510-5fc6-4717-b94c-de8ca4fb7c54","Type":"ContainerDied","Data":"fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28"} Jan 22 16:30:06 crc kubenswrapper[4758]: I0122 16:30:06.994912 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:06Z 
is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.010017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.010054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.010065 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.010081 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.010094 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.024209 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.064846 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.100288 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.112972 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.113002 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.113010 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.113022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.113032 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.146173 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50
680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.178893 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.215098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.215136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.215148 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.215162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.215171 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.217392 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.258359 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.301846 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.330443 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.330539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.330583 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.330639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.330688 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.376042 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f38
20d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.393968 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.415435 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.432854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.432896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.432908 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.432924 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.432935 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.462331 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.508948 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.535631 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.535676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.535689 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.535705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.536188 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.539261 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.580996 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.638101 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.638239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.638343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.638461 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.638551 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.741067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.741100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.741115 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.741136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.741144 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.777383 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:00:02.362043601 +0000 UTC Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.843788 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.843829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.843840 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.843854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.843866 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.946422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.947346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.947453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.947564 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.947651 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:07Z","lastTransitionTime":"2026-01-22T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.995331 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b"} Jan 22 16:30:07 crc kubenswrapper[4758]: I0122 16:30:07.998963 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" event={"ID":"c9182510-5fc6-4717-b94c-de8ca4fb7c54","Type":"ContainerStarted","Data":"a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.012477 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.024915 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.039561 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.050323 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.050362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.050377 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.050400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.050415 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.060886 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f38
20d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.074634 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.084024 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.092333 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.101941 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.121535 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb2
3026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.137483 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.159300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.159333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.159346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.159381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.159393 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.159410 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.180009 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.192105 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.206933 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.218015 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.261330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.261360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.261368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.261385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.261394 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.364633 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.364939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.365014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.365081 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.365167 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.468636 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.468684 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.468700 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.468725 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.468788 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.571327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.571582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.571682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.572490 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.572545 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.675815 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.675851 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.675863 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.675879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.675891 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.692188 4758 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.778085 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:14:46.120939741 +0000 UTC Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.778452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.778474 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.778483 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.778495 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.778503 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.807728 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.807827 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:08 crc kubenswrapper[4758]: E0122 16:30:08.807846 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:08 crc kubenswrapper[4758]: E0122 16:30:08.807975 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.808166 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:08 crc kubenswrapper[4758]: E0122 16:30:08.808264 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.838144 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132
377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.856493 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.878023 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.880374 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.880400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.880410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.880424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.880433 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.904307 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.922021 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.939201 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.952161 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.969786 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.983265 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.983300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.983309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.983324 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.983334 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:08Z","lastTransitionTime":"2026-01-22T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:08 crc kubenswrapper[4758]: I0122 16:30:08.988454 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f38
20d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.006379 4758 generic.go:334] "Generic (PLEG): container finished" podID="c9182510-5fc6-4717-b94c-de8ca4fb7c54" containerID="a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263" exitCode=0 Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.006371 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.006435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" event={"ID":"c9182510-5fc6-4717-b94c-de8ca4fb7c54","Type":"ContainerDied","Data":"a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.021197 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.040524 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.055064 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.069312 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.081173 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.085962 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.086004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.086013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.086028 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.086040 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.092213 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.107664 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.119559 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.131973 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.145133 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.166481 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.180052 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.189172 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.189206 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.189217 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.189232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.189242 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.193034 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.212479 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0
dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.224297 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.237115 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.258587 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.291470 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.291506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.291517 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.291536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.291547 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.303516 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z 
is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.343532 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.378884 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.393298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.393323 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.393331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.393343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.393352 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.495532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.495572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.495585 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.495601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.495611 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.598234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.598281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.598294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.598312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.598325 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.700268 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.700314 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.700325 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.700343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.700355 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.778557 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:56:49.329050641 +0000 UTC Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.803166 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.803209 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.803224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.803243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.803259 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.907298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.907343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.907359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.907381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.907399 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.978094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.978141 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.978153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.978171 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.978183 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:09 crc kubenswrapper[4758]: E0122 16:30:09.992292 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.996629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.996660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.996668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.996682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:09 crc kubenswrapper[4758]: I0122 16:30:09.996691 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:09Z","lastTransitionTime":"2026-01-22T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: E0122 16:30:10.014947 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.018548 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.018615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.018719 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.018825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.019177 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.025526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.026297 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.026336 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.035773 4758 generic.go:334] "Generic (PLEG): container finished" podID="c9182510-5fc6-4717-b94c-de8ca4fb7c54" containerID="c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43" exitCode=0 Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.035844 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" event={"ID":"c9182510-5fc6-4717-b94c-de8ca4fb7c54","Type":"ContainerDied","Data":"c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43"} Jan 22 16:30:10 crc kubenswrapper[4758]: E0122 16:30:10.039450 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.044411 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.049226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.049256 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.049269 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.049286 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.049299 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.058955 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.073418 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.073604 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:10 crc kubenswrapper[4758]: E0122 16:30:10.073541 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.077907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.077929 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.077939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.077954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.077964 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.078048 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15
612a10b91587eb27ab1b92e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.091669 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: E0122 16:30:10.096148 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: E0122 16:30:10.096308 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.101162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.101190 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.101201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.101217 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.101228 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.106514 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.117245 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.137822 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.158828 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.171980 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.184016 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.194409 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.204563 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.204601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.204613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.204627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.204636 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.209114 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.221590 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.242301 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd
3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.258847 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.272052 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.283712 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.293384 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.303653 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.307979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.308010 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.308019 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.308033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.308042 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.317737 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.327442 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.354342 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.368019 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.383821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.396293 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.409970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.410071 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.410089 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.410341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.410354 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.417060 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.461211 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.499637 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.512672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.512706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.512716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.512732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.512758 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.541230 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.585534 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:10Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.615727 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.615784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.615795 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.615810 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.615821 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.719082 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.719122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.719132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.719147 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.719156 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.778835 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:38:54.833341012 +0000 UTC Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.807425 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.807477 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.807482 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:10 crc kubenswrapper[4758]: E0122 16:30:10.807603 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:10 crc kubenswrapper[4758]: E0122 16:30:10.807816 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:10 crc kubenswrapper[4758]: E0122 16:30:10.807913 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.820865 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.820915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.820930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.820949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.820967 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.923970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.924281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.924389 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.924496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:10 crc kubenswrapper[4758]: I0122 16:30:10.924593 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:10Z","lastTransitionTime":"2026-01-22T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.028080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.028113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.028122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.028138 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.028150 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.044141 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.045875 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" event={"ID":"c9182510-5fc6-4717-b94c-de8ca4fb7c54","Type":"ContainerStarted","Data":"fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.065148 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.077886 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.089951 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.115465 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15
612a10b91587eb27ab1b92e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.124232 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.129909 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.129969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.130007 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.130023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.130032 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.135439 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.145781 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.157697 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.167014 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.178665 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.197858 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a
4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.210822 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.221356 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.231810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.231838 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.231847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.231859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.231868 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.234759 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.246588 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:11Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.334239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.334552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.334620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.334681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.334738 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.437949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.437988 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.437996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.438010 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.438020 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.541511 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.541824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.541977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.542105 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.542308 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.645229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.645294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.645310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.645333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.645348 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.747869 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.747905 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.747916 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.747931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.747940 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.779556 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:49:45.939728198 +0000 UTC Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.849838 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.850090 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.850160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.850241 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.850316 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.953522 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.953614 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.953660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.953694 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:11 crc kubenswrapper[4758]: I0122 16:30:11.953723 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:11Z","lastTransitionTime":"2026-01-22T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.047654 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.055816 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.055878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.055894 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.055911 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.055921 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.158476 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.158879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.159093 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.159311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.159543 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.270534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.270795 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.270871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.270948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.271012 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.373542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.373571 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.373580 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.373594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.373603 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.476492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.476523 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.476531 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.476552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.476562 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.579264 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.579327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.579347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.579376 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.579410 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.613133 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.613331 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:28.613311432 +0000 UTC m=+50.096650717 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.681696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.681774 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.681788 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.681814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.681828 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.713837 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.713885 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.713911 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.713933 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714051 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714068 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714081 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714128 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:28.714114327 +0000 UTC m=+50.197453612 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714181 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714193 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714201 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714245 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:28.714235561 +0000 UTC m=+50.197574846 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714295 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714317 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:28.714310383 +0000 UTC m=+50.197649668 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714343 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.714361 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:28.714356764 +0000 UTC m=+50.197696049 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.780315 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:33:36.241178558 +0000 UTC Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.783873 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.783914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.783925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.783939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.783949 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.807129 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.807146 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.807244 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.807342 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.807375 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:12 crc kubenswrapper[4758]: E0122 16:30:12.807414 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.886864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.886937 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.886949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.886965 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.886977 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.990846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.990918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.990927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.990940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:12 crc kubenswrapper[4758]: I0122 16:30:12.990948 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:12Z","lastTransitionTime":"2026-01-22T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.093257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.093287 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.093296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.093310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.093321 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:13Z","lastTransitionTime":"2026-01-22T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.195452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.195502 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.195521 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.195545 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.195564 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:13Z","lastTransitionTime":"2026-01-22T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.297820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.297855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.297864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.297894 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.297906 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:13Z","lastTransitionTime":"2026-01-22T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.401067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.401120 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.401140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.401163 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.401181 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:13Z","lastTransitionTime":"2026-01-22T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.504039 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.504081 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.504094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.504110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.504119 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:13Z","lastTransitionTime":"2026-01-22T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.607015 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.607070 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.607092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.607120 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.607143 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:13Z","lastTransitionTime":"2026-01-22T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.709944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.709986 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.710002 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.710035 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.710051 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:13Z","lastTransitionTime":"2026-01-22T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.837917 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:47:20.609021929 +0000 UTC Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.838701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.838791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.838812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.838837 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.838857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:13Z","lastTransitionTime":"2026-01-22T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.941261 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.941299 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.941310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.941329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:13 crc kubenswrapper[4758]: I0122 16:30:13.941346 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:13Z","lastTransitionTime":"2026-01-22T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.057627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.057692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.057701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.057714 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.057735 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.160234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.160278 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.160288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.160303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.160314 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.277929 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.277985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.277996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.278017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.278026 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.380675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.380711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.380719 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.380735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.380784 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.483402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.483425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.483436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.483457 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.483467 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.585814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.585871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.585882 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.585899 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.585911 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.688803 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.688874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.688903 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.688927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.688946 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.791192 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.791225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.791234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.791249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.791259 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.807482 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:14 crc kubenswrapper[4758]: E0122 16:30:14.807675 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.807906 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.807982 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:14 crc kubenswrapper[4758]: E0122 16:30:14.808045 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:14 crc kubenswrapper[4758]: E0122 16:30:14.808204 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.838326 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:12:44.038058944 +0000 UTC Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.893704 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.893760 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.893771 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.893787 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.893799 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.902695 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh"] Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.903184 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.905220 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.906540 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.920625 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15
612a10b91587eb27ab1b92e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.934461 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.946320 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5lx7\" (UniqueName: \"kubernetes.io/projected/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-kube-api-access-w5lx7\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.946358 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.946392 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.946409 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.947181 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.961646 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.977167 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.993204 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0a
bdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"R
unning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.996152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.996224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.996248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.996296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:14 crc kubenswrapper[4758]: I0122 16:30:14.996319 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:14Z","lastTransitionTime":"2026-01-22T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.005222 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.020577 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.031905 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.043333 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.047055 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5lx7\" (UniqueName: \"kubernetes.io/projected/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-kube-api-access-w5lx7\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.047111 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.047173 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.047212 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.048032 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.048159 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.055175 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.055529 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.066987 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5lx7\" (UniqueName: \"kubernetes.io/projected/b21f81e8-3f11-43f9-abdb-09e8d25aeb73-kube-api-access-w5lx7\") pod \"ovnkube-control-plane-749d76644c-cbszh\" (UID: \"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.072111 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPat
h\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.096503 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.098812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.098880 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.098906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.098937 4758 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.098962 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:15Z","lastTransitionTime":"2026-01-22T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.112268 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.127400 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.128302 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.145554 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.159399 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.173133 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.197599 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15
612a10b91587eb27ab1b92e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.201861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.201898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.201909 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.201925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.201935 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:15Z","lastTransitionTime":"2026-01-22T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.211059 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.215066 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.229278 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: W0122 16:30:15.232986 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb21f81e8_3f11_43f9_abdb_09e8d25aeb73.slice/crio-b3b9cdbe208c34955c5e159ccec23927d53dd21e6203a5d4f809104e6d6bfa8a WatchSource:0}: Error finding container b3b9cdbe208c34955c5e159ccec23927d53dd21e6203a5d4f809104e6d6bfa8a: Status 404 returned error can't find the container with id b3b9cdbe208c34955c5e159ccec23927d53dd21e6203a5d4f809104e6d6bfa8a Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.242411 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.257873 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.275347 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067461
6e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.285565 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.297567 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.304066 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.304105 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.304116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.304133 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.304175 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:15Z","lastTransitionTime":"2026-01-22T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.308980 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.318805 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.330370 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.343840 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.365102 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.385620 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.407062 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.407094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.407105 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.407119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.407129 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:15Z","lastTransitionTime":"2026-01-22T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.509777 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.509841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.509876 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.509914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.509938 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:15Z","lastTransitionTime":"2026-01-22T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.613100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.613180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.613197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.613220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.613237 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:15Z","lastTransitionTime":"2026-01-22T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.716793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.716853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.716878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.716911 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.716934 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:15Z","lastTransitionTime":"2026-01-22T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.824967 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.825016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.825027 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.825045 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.825062 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:15Z","lastTransitionTime":"2026-01-22T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.839350 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:46:34.359568535 +0000 UTC Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.928473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.928556 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.928581 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.928615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:15 crc kubenswrapper[4758]: I0122 16:30:15.928640 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:15Z","lastTransitionTime":"2026-01-22T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.031453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.031503 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.031520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.031541 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.031556 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.064885 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" event={"ID":"b21f81e8-3f11-43f9-abdb-09e8d25aeb73","Type":"ContainerStarted","Data":"b3b9cdbe208c34955c5e159ccec23927d53dd21e6203a5d4f809104e6d6bfa8a"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.134960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.135041 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.135059 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.135082 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.135099 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.238537 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.238622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.238662 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.238694 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.238715 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.340711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.340778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.340791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.340810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.340821 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.444435 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.444472 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.444481 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.444494 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.444503 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.546722 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.547111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.547123 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.547141 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.547153 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.649793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.649830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.649839 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.649856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.649866 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.752247 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.752293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.752304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.752321 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.752333 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.769532 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2xqns"] Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.770049 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:16 crc kubenswrapper[4758]: E0122 16:30:16.770117 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.780725 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.795238 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.808095 4758 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.808100 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:16 crc kubenswrapper[4758]: E0122 16:30:16.808252 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:16 crc kubenswrapper[4758]: E0122 16:30:16.808356 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.808100 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:16 crc kubenswrapper[4758]: E0122 16:30:16.808427 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.812156 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 
16:30:16.822850 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.837494 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.839715 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:20:58.835732923 +0000 UTC Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.848679 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.853900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.854125 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.854186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.854256 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.854336 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.860478 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.863458 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.863500 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8br2\" (UniqueName: \"kubernetes.io/projected/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-kube-api-access-k8br2\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.873828 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.884628 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc 
kubenswrapper[4758]: I0122 16:30:16.904972 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.920898 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.932584 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.945159 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.957255 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.957298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.957309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.957325 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.957340 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:16Z","lastTransitionTime":"2026-01-22T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.958996 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.964072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.964113 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8br2\" (UniqueName: \"kubernetes.io/projected/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-kube-api-access-k8br2\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:16 crc kubenswrapper[4758]: E0122 16:30:16.964203 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:16 crc kubenswrapper[4758]: E0122 16:30:16.964251 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs podName:3ef1c490-d5f9-458d-8b3e-8580a5f07df6 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:17.464237697 +0000 UTC m=+38.947576972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs") pod "network-metrics-daemon-2xqns" (UID: "3ef1c490-d5f9-458d-8b3e-8580a5f07df6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.976287 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15
612a10b91587eb27ab1b92e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.982488 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8br2\" (UniqueName: \"kubernetes.io/projected/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-kube-api-access-k8br2\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.986096 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:16 crc kubenswrapper[4758]: I0122 16:30:16.996535 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.059038 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.059067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.059075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.059089 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.059098 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.068974 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/0.log" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.074835 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1" exitCode=1 Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.074979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.076075 4758 scope.go:117] "RemoveContainer" containerID="49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.078564 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" event={"ID":"b21f81e8-3f11-43f9-abdb-09e8d25aeb73","Type":"ContainerStarted","Data":"e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.078643 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" event={"ID":"b21f81e8-3f11-43f9-abdb-09e8d25aeb73","Type":"ContainerStarted","Data":"0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.087100 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.099509 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.113426 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.123851 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.136506 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.149938 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.162833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.162891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.162906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.163268 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.163306 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.165214 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.178627 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.188236 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc 
kubenswrapper[4758]: I0122 16:30:17.208802 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.224295 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.236682 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.252040 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.266215 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.268598 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.268643 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.268654 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.268690 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.268701 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.286574 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:30:14.741381 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:14.741409 6045 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:30:14.741416 6045 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 16:30:14.741429 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:14.741420 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:30:14.741448 6045 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:30:14.741468 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:30:14.741479 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:30:14.741462 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:14.741484 6045 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:14.741541 6045 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:14.741536 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:30:14.741494 6045 factory.go:656] Stopping watch factory\\\\nI0122 16:30:14.741600 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:14.741644 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:14.741507 6045 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.297260 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.310305 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.322614 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.336781 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.349177 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.360345 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.372751 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.372779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.372787 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.372800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.372810 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.373986 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.390211 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.402070 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.425409 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.439732 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba
622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.460055 4758 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.468824 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:17 crc kubenswrapper[4758]: E0122 16:30:17.468976 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:17 crc kubenswrapper[4758]: E0122 16:30:17.469040 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs podName:3ef1c490-d5f9-458d-8b3e-8580a5f07df6 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:18.469023018 +0000 UTC m=+39.952362303 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs") pod "network-metrics-daemon-2xqns" (UID: "3ef1c490-d5f9-458d-8b3e-8580a5f07df6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.472679 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.474300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.474331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.474344 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.474359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.474370 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.483524 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.493660 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.505258 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.516834 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.530480 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.549381 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:30:14.741381 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:14.741409 6045 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:30:14.741416 6045 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 16:30:14.741429 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:14.741420 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:30:14.741448 6045 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:30:14.741468 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:30:14.741479 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:30:14.741462 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:14.741484 6045 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:14.741541 6045 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:14.741536 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:30:14.741494 6045 factory.go:656] Stopping watch factory\\\\nI0122 16:30:14.741600 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:14.741644 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:14.741507 6045 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.576651 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.576684 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.576692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.576706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.576957 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.679650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.679701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.679711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.679726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.679735 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.782568 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.782624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.782637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.782653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.783028 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.839895 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:27:38.817391402 +0000 UTC Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.885362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.885394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.885403 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.885417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.885426 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.987367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.987416 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.987429 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.987446 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:17 crc kubenswrapper[4758]: I0122 16:30:17.987457 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:17Z","lastTransitionTime":"2026-01-22T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.083159 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/0.log" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.086603 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.086806 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.089411 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.089478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.089495 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.089516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.089530 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:18Z","lastTransitionTime":"2026-01-22T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.101466 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.120202 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 
2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.136431 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.151013 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.163945 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.176081 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.188386 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.191886 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.191911 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.191919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.191931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.191940 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:18Z","lastTransitionTime":"2026-01-22T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.203170 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.217336 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.238100 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensu
re-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.256146 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.272073 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.286731 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.293693 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.293734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.293757 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.293773 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.293783 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:18Z","lastTransitionTime":"2026-01-22T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.308475 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ece
be9f020c607ccb7311fcfa86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:30:14.741381 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:14.741409 6045 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:30:14.741416 6045 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 16:30:14.741429 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:14.741420 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:30:14.741448 6045 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:30:14.741468 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:30:14.741479 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:30:14.741462 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:14.741484 6045 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:14.741541 6045 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:14.741536 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:30:14.741494 6045 factory.go:656] Stopping watch factory\\\\nI0122 16:30:14.741600 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:14.741644 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:14.741507 6045 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.320694 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.332428 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.344605 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.395993 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.396029 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.396039 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.396054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.396063 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:18Z","lastTransitionTime":"2026-01-22T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.479148 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:18 crc kubenswrapper[4758]: E0122 16:30:18.479273 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:18 crc kubenswrapper[4758]: E0122 16:30:18.479326 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs podName:3ef1c490-d5f9-458d-8b3e-8580a5f07df6 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:20.47931187 +0000 UTC m=+41.962651145 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs") pod "network-metrics-daemon-2xqns" (UID: "3ef1c490-d5f9-458d-8b3e-8580a5f07df6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.498359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.498398 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.498407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.498420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.498428 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:18Z","lastTransitionTime":"2026-01-22T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.601617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.601681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.601698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.601719 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.601736 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:18Z","lastTransitionTime":"2026-01-22T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.704026 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.704085 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.704102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.704125 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.704141 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:18Z","lastTransitionTime":"2026-01-22T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.807080 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.807106 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.807130 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.807150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.807161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.807177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.807188 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:18Z","lastTransitionTime":"2026-01-22T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: E0122 16:30:18.807234 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.807337 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:18 crc kubenswrapper[4758]: E0122 16:30:18.807171 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:18 crc kubenswrapper[4758]: E0122 16:30:18.807453 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.807697 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:18 crc kubenswrapper[4758]: E0122 16:30:18.807876 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.821142 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.832822 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.840514 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:23:55.832628134 +0000 UTC Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.844254 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.861187 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ece
be9f020c607ccb7311fcfa86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:30:14.741381 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:14.741409 6045 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:30:14.741416 6045 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 16:30:14.741429 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:14.741420 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:30:14.741448 6045 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:30:14.741468 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:30:14.741479 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:30:14.741462 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:14.741484 6045 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:14.741541 6045 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:14.741536 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:30:14.741494 6045 factory.go:656] Stopping watch factory\\\\nI0122 16:30:14.741600 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:14.741644 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:14.741507 6045 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.871695 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.889556 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.900971 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.910295 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.910337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.910345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.910361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.910371 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:18Z","lastTransitionTime":"2026-01-22T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.910661 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.924498 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 
2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.935659 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.948165 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.962526 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.973491 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.986077 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:18 crc kubenswrapper[4758]: I0122 16:30:18.997713 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.012553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.012603 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.012616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.012633 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.012645 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.017188 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.030128 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.091490 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/1.log" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.092343 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/0.log" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.095857 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86" exitCode=1 Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.095921 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.096001 4758 scope.go:117] "RemoveContainer" containerID="49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.096904 4758 scope.go:117] "RemoveContainer" containerID="84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86" Jan 22 16:30:19 crc kubenswrapper[4758]: E0122 16:30:19.097132 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.115346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.115406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.115424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.115447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.115464 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.148343 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.176753 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.196148 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.215344 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.217842 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.217876 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.217890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.217905 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.217915 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.227985 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.240476 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.251244 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.261849 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.272802 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.290634 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ece
be9f020c607ccb7311fcfa86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:30:14.741381 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:14.741409 6045 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:30:14.741416 6045 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 16:30:14.741429 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:14.741420 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:30:14.741448 6045 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:30:14.741468 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:30:14.741479 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:30:14.741462 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:14.741484 6045 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:14.741541 6045 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:14.741536 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:30:14.741494 6045 factory.go:656] Stopping watch factory\\\\nI0122 16:30:14.741600 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:14.741644 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:14.741507 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:18Z\\\",\\\"message\\\":\\\"0122 16:30:17.825337 6251 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0122 16:30:17.825358 6251 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0122 16:30:17.825368 6251 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-2xqns] creating logical port openshift-multus_network-metrics-daemon-2xqns for pod on switch crc\\\\nI0122 16:30:17.825368 6251 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0122 16:30:17.825377 6251 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0122 16:30:17.825374 6251 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6
df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.301593 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.314170 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.319966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.320006 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.320016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.320032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.320044 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.326162 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.335858 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.344709 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.359411 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.369623 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.422133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.422164 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.422176 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.422191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.422201 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.524869 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.524933 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.524953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.524977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.524994 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.627925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.628265 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.628276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.628292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.628305 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.731829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.731883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.731892 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.731910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.731919 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.834670 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.834709 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.834718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.834762 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.834772 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.840869 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:47:10.184380771 +0000 UTC Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.937043 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.937087 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.937097 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.937112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:19 crc kubenswrapper[4758]: I0122 16:30:19.937124 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:19Z","lastTransitionTime":"2026-01-22T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.040133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.040169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.040178 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.040192 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.040200 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.101325 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/1.log" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.142701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.142794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.142813 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.142835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.142849 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.245840 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.245889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.245904 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.245925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.245939 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.348193 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.348250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.348260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.348276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.348285 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.411653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.411824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.411854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.411878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.411896 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.430701 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:20Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.434628 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.434663 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.434672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.434687 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.434697 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.445504 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:20Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.448938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.449033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.449054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.449113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.449133 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.468139 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:20Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.472482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.472520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.472529 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.472544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.472553 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.484567 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:20Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.488706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.488769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.488781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.488799 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.488812 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.495374 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.495486 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.495528 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs podName:3ef1c490-d5f9-458d-8b3e-8580a5f07df6 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:24.495514852 +0000 UTC m=+45.978854137 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs") pod "network-metrics-daemon-2xqns" (UID: "3ef1c490-d5f9-458d-8b3e-8580a5f07df6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.499799 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:20Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.499945 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.501498 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.501604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.501663 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.501724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.501797 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.604228 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.604259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.604268 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.604284 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.604295 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.707583 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.707642 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.707653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.707668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.707677 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.808161 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.808259 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.808289 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.808324 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.808316 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.808427 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.808485 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:20 crc kubenswrapper[4758]: E0122 16:30:20.808543 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.809873 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.809918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.809926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.809942 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.809952 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.841259 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:19:23.17718704 +0000 UTC Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.912522 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.912582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.912594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.912613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:20 crc kubenswrapper[4758]: I0122 16:30:20.912625 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:20Z","lastTransitionTime":"2026-01-22T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.014664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.014702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.014714 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.014758 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.014773 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.117372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.117398 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.117407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.117420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.117428 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.220862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.220935 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.220953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.220978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.220996 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.323698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.323790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.323808 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.323832 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.323850 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.426717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.426845 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.426871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.426901 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.426925 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.530191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.530239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.530251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.530268 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.530279 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.632453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.632492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.632506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.632520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.632532 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.735695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.735803 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.735839 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.735875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.735912 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.838575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.838602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.838775 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.838790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.838800 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.841918 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:10:08.531382158 +0000 UTC Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.941946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.942006 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.942022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.942048 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:21 crc kubenswrapper[4758]: I0122 16:30:21.942064 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:21Z","lastTransitionTime":"2026-01-22T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.044938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.044983 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.044999 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.045032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.045050 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.147701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.147817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.147834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.147857 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.147911 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.249503 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.249557 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.249574 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.249595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.249611 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.351619 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.351684 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.351696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.351713 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.351724 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.454303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.454340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.454350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.454366 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.454375 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.557428 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.557485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.557501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.557524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.557542 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.659887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.659934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.659947 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.659965 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.659977 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.762849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.763180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.763279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.763381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.763474 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.807021 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.807033 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:22 crc kubenswrapper[4758]: E0122 16:30:22.807165 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.807228 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:22 crc kubenswrapper[4758]: E0122 16:30:22.807252 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:22 crc kubenswrapper[4758]: E0122 16:30:22.807430 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.807571 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:22 crc kubenswrapper[4758]: E0122 16:30:22.807724 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.842492 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:59:37.584777729 +0000 UTC Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.866384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.866420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.866430 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.866446 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.866457 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.968968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.969229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.969303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.969368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:22 crc kubenswrapper[4758]: I0122 16:30:22.969429 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:22Z","lastTransitionTime":"2026-01-22T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.071910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.072182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.072285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.072385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.072469 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.175510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.175631 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.175651 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.175668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.175680 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.278598 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.278676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.278700 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.278729 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.278784 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.381474 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.381798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.381941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.382087 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.382222 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.484387 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.485035 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.485140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.485237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.485324 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.587499 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.587539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.587550 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.587564 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.587575 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.690786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.690826 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.690837 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.690853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.690864 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.793031 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.793078 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.793091 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.793109 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.793121 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.842957 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 20:42:24.299570574 +0000 UTC Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.895400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.895443 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.895455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.895472 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.895484 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.997850 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.997900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.997914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.997934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:23 crc kubenswrapper[4758]: I0122 16:30:23.997948 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:23Z","lastTransitionTime":"2026-01-22T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.099948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.099984 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.099992 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.100006 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.100015 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:24Z","lastTransitionTime":"2026-01-22T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.202525 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.202574 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.202586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.202606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.202619 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:24Z","lastTransitionTime":"2026-01-22T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.305538 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.305595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.305606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.305622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.305634 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:24Z","lastTransitionTime":"2026-01-22T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.408108 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.408153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.408168 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.408183 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.408191 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:24Z","lastTransitionTime":"2026-01-22T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.513253 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.513687 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.513697 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.513710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.513720 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:24Z","lastTransitionTime":"2026-01-22T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.535016 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:24 crc kubenswrapper[4758]: E0122 16:30:24.535158 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:24 crc kubenswrapper[4758]: E0122 16:30:24.535214 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs podName:3ef1c490-d5f9-458d-8b3e-8580a5f07df6 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:32.535199172 +0000 UTC m=+54.018538447 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs") pod "network-metrics-daemon-2xqns" (UID: "3ef1c490-d5f9-458d-8b3e-8580a5f07df6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.616827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.616896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.616910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.616929 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.616941 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:24Z","lastTransitionTime":"2026-01-22T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.719092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.719147 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.719168 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.719197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.719219 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:24Z","lastTransitionTime":"2026-01-22T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.807927 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.807985 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.808021 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:24 crc kubenswrapper[4758]: E0122 16:30:24.808047 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.807934 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:24 crc kubenswrapper[4758]: E0122 16:30:24.808153 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:24 crc kubenswrapper[4758]: E0122 16:30:24.808206 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:24 crc kubenswrapper[4758]: E0122 16:30:24.808298 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.821664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.821702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.821712 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.821730 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.821763 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:24Z","lastTransitionTime":"2026-01-22T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.843462 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:17:48.253283483 +0000 UTC Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.924411 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.924480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.924498 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.924519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:24 crc kubenswrapper[4758]: I0122 16:30:24.924537 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:24Z","lastTransitionTime":"2026-01-22T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.026616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.026660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.026671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.026691 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.026703 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.129098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.129131 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.129140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.129155 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.129167 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.231711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.231765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.231774 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.231787 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.231797 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.334423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.334492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.334510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.334533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.334550 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.437076 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.437128 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.437140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.437158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.437173 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.539607 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.539666 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.539686 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.539711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.539729 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.643052 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.643132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.643152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.643181 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.643203 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.745385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.745443 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.745460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.745506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.745524 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.836144 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.843854 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:46:23.26053117 +0000 UTC Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.845188 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.847395 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.847427 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.847437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.847457 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.847470 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.863286 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.878452 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.890964 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.903928 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.916875 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:25 crc 
kubenswrapper[4758]: I0122 16:30:25.931601 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.944957 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.949520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.949575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.949592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.949615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.949632 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:25Z","lastTransitionTime":"2026-01-22T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.962248 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:25 crc kubenswrapper[4758]: I0122 16:30:25.979517 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.002438 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:30:14.741381 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:14.741409 6045 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:30:14.741416 6045 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 16:30:14.741429 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:14.741420 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:30:14.741448 6045 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:30:14.741468 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:30:14.741479 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:30:14.741462 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:14.741484 6045 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:14.741541 6045 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:14.741536 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:30:14.741494 6045 factory.go:656] Stopping watch factory\\\\nI0122 16:30:14.741600 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:14.741644 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:14.741507 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:18Z\\\",\\\"message\\\":\\\"0122 16:30:17.825337 6251 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0122 16:30:17.825358 6251 ovn.go:134] 
Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0122 16:30:17.825368 6251 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-2xqns] creating logical port openshift-multus_network-metrics-daemon-2xqns for pod on switch crc\\\\nI0122 16:30:17.825368 6251 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0122 16:30:17.825377 6251 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0122 16:30:17.825374 6251 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.020797 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:26Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.032550 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:26Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.043185 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:26Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.051790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.051847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.051857 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.051869 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.051879 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.054803 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:26Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.065375 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:26Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.079298 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:26Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.090924 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:26Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.153846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.153886 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.153898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.153913 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.153926 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.256147 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.256418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.256486 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.256558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.256629 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.359360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.359419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.359432 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.359453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.359466 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.462906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.462966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.462988 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.463018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.463043 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.565799 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.565862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.565888 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.565916 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.565938 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.668484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.668522 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.668533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.668546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.668556 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.771253 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.771302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.771317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.771340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.771353 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.807904 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.807973 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.808016 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:26 crc kubenswrapper[4758]: E0122 16:30:26.808048 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.807922 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:26 crc kubenswrapper[4758]: E0122 16:30:26.808186 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:26 crc kubenswrapper[4758]: E0122 16:30:26.808305 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:26 crc kubenswrapper[4758]: E0122 16:30:26.808400 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.844812 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:09:03.665467864 +0000 UTC Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.874459 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.874510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.874522 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.874539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.874551 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.977725 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.977798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.977813 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.977833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:26 crc kubenswrapper[4758]: I0122 16:30:26.977847 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:26Z","lastTransitionTime":"2026-01-22T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.081112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.081143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.081152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.081168 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.081177 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:27Z","lastTransitionTime":"2026-01-22T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.183445 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.183501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.183518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.183540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.183558 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:27Z","lastTransitionTime":"2026-01-22T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.285896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.285946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.285961 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.285982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.286000 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:27Z","lastTransitionTime":"2026-01-22T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.388078 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.388402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.388468 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.388536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.388589 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:27Z","lastTransitionTime":"2026-01-22T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.490582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.490658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.490680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.490705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.490874 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:27Z","lastTransitionTime":"2026-01-22T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.593294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.593954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.593985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.594008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.594023 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:27Z","lastTransitionTime":"2026-01-22T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.696644 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.696697 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.696713 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.696732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.696777 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:27Z","lastTransitionTime":"2026-01-22T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.799398 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.799795 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.799990 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.800149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.800321 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:27Z","lastTransitionTime":"2026-01-22T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.844958 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:27:11.852560173 +0000 UTC Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.903505 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.903807 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.904113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.904387 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:27 crc kubenswrapper[4758]: I0122 16:30:27.904613 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:27Z","lastTransitionTime":"2026-01-22T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.011162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.011800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.012005 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.012228 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.012411 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.115794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.115860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.115872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.115891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.115903 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.218705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.219018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.219055 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.219081 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.219100 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.322731 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.322801 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.322812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.322827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.322837 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.426298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.426338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.426347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.426364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.426376 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.528430 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.528471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.528481 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.528496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.528506 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.631831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.631874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.631884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.631900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.631912 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.674724 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.675079 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.675043869 +0000 UTC m=+82.158383184 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.734793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.734824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.734834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.734848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.734857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.776048 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.776129 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.776170 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.776214 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776264 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776334 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776371 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.776342748 +0000 UTC m=+82.259682083 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776403 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.77638914 +0000 UTC m=+82.259728555 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776421 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776463 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776483 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776555 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.776531205 +0000 UTC m=+82.259870530 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776696 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776717 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776732 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.776824 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.776808522 +0000 UTC m=+82.260147877 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.808185 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.808328 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.808422 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.808519 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.808653 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.808710 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.808807 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:28 crc kubenswrapper[4758]: E0122 16:30:28.808859 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.828142 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.837540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.837593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.837604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.837622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.837651 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.842899 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.845567 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:23:42.135704092 +0000 UTC Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.861072 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.882261 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.904053 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ece
be9f020c607ccb7311fcfa86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49acb04b625fa7a5eac407a46db0479dd9498d15612a10b91587eb27ab1b92e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:30:14.741381 6045 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:14.741409 6045 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:30:14.741416 6045 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 16:30:14.741429 6045 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:14.741420 6045 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:30:14.741448 6045 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:30:14.741468 6045 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:30:14.741479 6045 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:30:14.741462 6045 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:14.741484 6045 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:14.741541 6045 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:14.741536 6045 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:30:14.741494 6045 factory.go:656] Stopping watch factory\\\\nI0122 16:30:14.741600 6045 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:14.741644 6045 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:14.741507 6045 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:18Z\\\",\\\"message\\\":\\\"0122 16:30:17.825337 6251 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0122 16:30:17.825358 6251 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0122 16:30:17.825368 6251 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-2xqns] creating logical port openshift-multus_network-metrics-daemon-2xqns for pod on switch crc\\\\nI0122 16:30:17.825368 6251 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0122 16:30:17.825377 6251 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0122 16:30:17.825374 6251 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6
df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.918318 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.935629 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.940248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.940292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.940311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.940335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.940351 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:28Z","lastTransitionTime":"2026-01-22T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.949382 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.962581 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.980158 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:28 crc kubenswrapper[4758]: I0122 16:30:28.997629 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.010864 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.028610 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.043067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.043110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.043121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.043136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.043148 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.043996 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.061026 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.072964 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.084986 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.095967 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.145607 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.145650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.145661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.145677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.145690 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.248798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.248856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.248870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.248887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.248898 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.288882 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.289915 4758 scope.go:117] "RemoveContainer" containerID="84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.308543 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.333215 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:18Z\\\",\\\"message\\\":\\\"0122 16:30:17.825337 6251 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0122 16:30:17.825358 6251 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0122 16:30:17.825368 6251 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-2xqns] creating logical port openshift-multus_network-metrics-daemon-2xqns for pod on switch crc\\\\nI0122 16:30:17.825368 6251 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0122 16:30:17.825377 6251 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0122 16:30:17.825374 6251 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting 
failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.347865 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.355373 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.355469 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.355483 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.355500 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.355548 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.365362 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.383277 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.400712 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.416160 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.426008 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.438414 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.449588 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.458111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.458152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.458163 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.458182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.458195 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.459722 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.471939 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.488915 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.504002 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc 
kubenswrapper[4758]: I0122 16:30:29.527421 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.541828 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.556189 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.561074 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.561134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.561149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.561164 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.561176 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.569960 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:29Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.663769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.663995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.664032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.664057 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.664071 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.768382 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.768446 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.768459 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.768479 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.768489 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.845939 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:58:19.80524588 +0000 UTC Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.871248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.871287 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.871298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.871317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.871329 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.974655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.974707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.974717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.974733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:29 crc kubenswrapper[4758]: I0122 16:30:29.974769 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:29Z","lastTransitionTime":"2026-01-22T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.077501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.077556 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.077570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.077593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.077607 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.136951 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/1.log" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.140459 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.140999 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.159642 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.175054 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.180345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.180392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.180407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.180424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.180435 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.199672 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.218696 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.233308 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.245584 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.258642 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.280193 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:18Z\\\",\\\"message\\\":\\\"0122 16:30:17.825337 6251 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0122 16:30:17.825358 6251 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0122 16:30:17.825368 6251 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-2xqns] creating logical port openshift-multus_network-metrics-daemon-2xqns for pod on switch crc\\\\nI0122 16:30:17.825368 6251 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0122 16:30:17.825377 6251 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0122 16:30:17.825374 6251 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal 
error\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.282257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.282302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.282315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.282335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.282347 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.295361 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.309422 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.323151 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.342081 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.359400 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.372071 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.384650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.384697 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.384706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.384721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.384732 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.388259 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.400819 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.412130 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.424924 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.487274 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.487323 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.487334 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.487351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.487366 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.589842 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.590129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.590249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.590337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.590426 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.658871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.659150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.659304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.659439 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.659549 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.674386 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.678639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.678686 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.678698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.678717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.678729 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.693264 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.697717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.697784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.697799 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.697819 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.697831 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.711936 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.715460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.715514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.715527 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.715546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.715561 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.732532 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.737396 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.737464 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.737477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.737495 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.737507 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.751611 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.751790 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.753346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.753413 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.753430 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.753450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.753465 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.807235 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.807284 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.807388 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.807235 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.807468 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.807534 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.807586 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:30 crc kubenswrapper[4758]: E0122 16:30:30.807628 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.846893 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:02:45.958886407 +0000 UTC Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.856135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.856188 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.856200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.856221 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.856235 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.958609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.958655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.958666 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.958683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:30 crc kubenswrapper[4758]: I0122 16:30:30.958695 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:30Z","lastTransitionTime":"2026-01-22T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.061624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.061912 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.062063 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.062248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.062370 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.145929 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/2.log" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.146730 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/1.log" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.149894 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9" exitCode=1 Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.149955 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.150012 4758 scope.go:117] "RemoveContainer" containerID="84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.150699 4758 scope.go:117] "RemoveContainer" containerID="99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9" Jan 22 16:30:31 crc kubenswrapper[4758]: E0122 16:30:31.150900 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.164732 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.165624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.165657 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.165669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.165685 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.165697 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.178828 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.192094 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.225343 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b6f8539a00b1a414508970ff366bcdcd904ecebe9f020c607ccb7311fcfa86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:18Z\\\",\\\"message\\\":\\\"0122 16:30:17.825337 6251 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0122 16:30:17.825358 6251 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0122 16:30:17.825368 6251 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-2xqns] creating logical port openshift-multus_network-metrics-daemon-2xqns for pod on switch crc\\\\nI0122 16:30:17.825368 6251 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0122 16:30:17.825377 6251 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0122 16:30:17.825374 6251 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"v1.Pod event handler 6 for removal\\\\nI0122 16:30:30.316677 6384 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:30:30.316778 6384 handler.go:208] Removed 
*v1.EgressIP event handler 8\\\\nI0122 16:30:30.316793 6384 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:30:30.316803 6384 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 16:30:30.316810 6384 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:30.316816 6384 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:30.316861 6384 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 16:30:30.316907 6384 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:30.316935 6384 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:30.316967 6384 factory.go:656] Stopping watch factory\\\\nI0122 16:30:30.316968 6384 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 16:30:30.317043 6384 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:30.317063 6384 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:30.317083 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 16:30:30.317040 6384 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0122 16:30:30.317166 6384 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.239478 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.256315 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.267720 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.267764 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.267772 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.267790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.267806 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.272483 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.283892 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.297401 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.312778 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.323393 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.354083 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.369616 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba
622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.370170 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.370195 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.370204 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.370218 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.370227 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.382292 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.394487 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.407187 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.417908 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc 
kubenswrapper[4758]: I0122 16:30:31.435035 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:31Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.472614 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.472658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.472680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.472699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.472711 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.575096 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.575170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.575196 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.575226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.575249 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.677871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.677941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.677953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.677974 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.677987 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.781156 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.781436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.781542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.781669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.781776 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.847066 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:30:00.787142803 +0000 UTC Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.883818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.884022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.884113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.884179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.884246 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.986409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.986491 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.986508 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.986535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:31 crc kubenswrapper[4758]: I0122 16:30:31.986552 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:31Z","lastTransitionTime":"2026-01-22T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.089051 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.089099 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.089111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.089129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.089142 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:32Z","lastTransitionTime":"2026-01-22T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.156421 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/2.log" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.161529 4758 scope.go:117] "RemoveContainer" containerID="99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9" Jan 22 16:30:32 crc kubenswrapper[4758]: E0122 16:30:32.161718 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.178642 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.191927 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.191972 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.191986 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.192011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.192025 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:32Z","lastTransitionTime":"2026-01-22T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.196324 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.210397 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.226341 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.242330 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.254528 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.268020 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.288886 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.294715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.294930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.295036 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.295135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.295216 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:32Z","lastTransitionTime":"2026-01-22T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.303945 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.329800 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.344685 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba
622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.362697 4758 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.379240 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.392804 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.397173 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.397307 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.397413 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.397545 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.397633 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:32Z","lastTransitionTime":"2026-01-22T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.414370 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"v1.Pod event handler 6 for removal\\\\nI0122 16:30:30.316677 6384 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:30:30.316778 6384 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:30.316793 6384 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:30:30.316803 6384 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 16:30:30.316810 6384 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:30.316816 6384 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:30.316861 6384 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 16:30:30.316907 6384 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:30.316935 6384 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:30.316967 6384 factory.go:656] Stopping watch factory\\\\nI0122 16:30:30.316968 6384 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 16:30:30.317043 6384 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:30.317063 6384 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:30.317083 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 16:30:30.317040 6384 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0122 16:30:30.317166 6384 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.428090 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.442562 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.455832 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:32Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.499831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.499882 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.499898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.499918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.499933 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:32Z","lastTransitionTime":"2026-01-22T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.602155 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.602205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.602221 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.602245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.602263 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:32Z","lastTransitionTime":"2026-01-22T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.619288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:32 crc kubenswrapper[4758]: E0122 16:30:32.619459 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:32 crc kubenswrapper[4758]: E0122 16:30:32.619543 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs podName:3ef1c490-d5f9-458d-8b3e-8580a5f07df6 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:48.619519229 +0000 UTC m=+70.102858554 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs") pod "network-metrics-daemon-2xqns" (UID: "3ef1c490-d5f9-458d-8b3e-8580a5f07df6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.704610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.704648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.704659 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.704674 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.704688 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:32Z","lastTransitionTime":"2026-01-22T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.807220 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.807304 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.807314 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.807220 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:32 crc kubenswrapper[4758]: E0122 16:30:32.807338 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:32 crc kubenswrapper[4758]: E0122 16:30:32.807443 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:32 crc kubenswrapper[4758]: E0122 16:30:32.807553 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.807686 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.807711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.807722 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.807760 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.807778 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:32Z","lastTransitionTime":"2026-01-22T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:32 crc kubenswrapper[4758]: E0122 16:30:32.807874 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.848331 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:06:34.633694612 +0000 UTC Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.910193 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.910262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.910278 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.910303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:32 crc kubenswrapper[4758]: I0122 16:30:32.910317 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:32Z","lastTransitionTime":"2026-01-22T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.013499 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.013552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.013564 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.013588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.013600 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.116938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.116999 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.117016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.117036 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.117056 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.221258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.221314 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.221330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.221367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.221380 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.323538 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.323612 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.323623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.323644 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.323657 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.426670 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.426705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.426717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.426732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.426762 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.529397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.529473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.529487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.529506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.529519 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.632784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.632907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.632921 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.632948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.632963 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.736012 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.736075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.736089 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.736114 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.736130 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.839516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.839597 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.839612 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.839637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.839657 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.849281 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:01:21.944234944 +0000 UTC Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.942673 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.942719 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.942728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.942763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:33 crc kubenswrapper[4758]: I0122 16:30:33.942774 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:33Z","lastTransitionTime":"2026-01-22T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.045680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.045763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.045784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.045811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.045830 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:34Z","lastTransitionTime":"2026-01-22T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.148511 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.148607 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.148624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.148650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.148667 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:34Z","lastTransitionTime":"2026-01-22T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.252244 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.252291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.252302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.252323 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.252336 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:34Z","lastTransitionTime":"2026-01-22T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.355384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.355480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.355493 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.355516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.355531 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:34Z","lastTransitionTime":"2026-01-22T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.458281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.458348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.458362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.458381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.458393 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:34Z","lastTransitionTime":"2026-01-22T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.562289 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.562365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.562381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.562404 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.562421 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:34Z","lastTransitionTime":"2026-01-22T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.665149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.665209 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.665219 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.665242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.665254 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:34Z","lastTransitionTime":"2026-01-22T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.768288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.768353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.768403 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.768446 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.768477 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:34Z","lastTransitionTime":"2026-01-22T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.807194 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.807251 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.807281 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.807374 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:34 crc kubenswrapper[4758]: E0122 16:30:34.807610 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:34 crc kubenswrapper[4758]: E0122 16:30:34.807830 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:34 crc kubenswrapper[4758]: E0122 16:30:34.807910 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:34 crc kubenswrapper[4758]: E0122 16:30:34.808051 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.850294 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:30:54.184300404 +0000 UTC Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.909343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.909383 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.909395 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.909414 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:34 crc kubenswrapper[4758]: I0122 16:30:34.909428 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:34Z","lastTransitionTime":"2026-01-22T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.012391 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.012430 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.012440 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.012455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.012466 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.116184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.116246 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.116258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.116279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.116294 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.219044 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.219089 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.219098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.219116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.219126 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.321902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.321964 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.321975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.321997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.322010 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.425524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.425559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.425571 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.425589 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.425600 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.529521 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.529574 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.529585 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.529606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.529619 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.633027 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.633071 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.633082 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.633096 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.633108 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.736260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.736331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.736388 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.736420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.736441 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.838497 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.838532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.838542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.838557 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.838568 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.851596 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:40:29.531494568 +0000 UTC Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.941161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.941229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.941245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.941267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:35 crc kubenswrapper[4758]: I0122 16:30:35.941280 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:35Z","lastTransitionTime":"2026-01-22T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.044440 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.044513 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.044528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.044553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.044570 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.148380 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.148470 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.148497 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.148529 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.148549 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.252017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.252083 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.252094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.252120 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.252136 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.355286 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.355362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.355387 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.355420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.355446 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.458587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.458618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.458626 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.458639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.458648 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.560704 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.560733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.560784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.560797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.560807 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.664077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.664134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.664147 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.664165 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.664177 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.766735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.766825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.766844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.766868 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.766888 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.807425 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.807495 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.807530 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:36 crc kubenswrapper[4758]: E0122 16:30:36.807799 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.807836 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:36 crc kubenswrapper[4758]: E0122 16:30:36.807990 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:36 crc kubenswrapper[4758]: E0122 16:30:36.808793 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:36 crc kubenswrapper[4758]: E0122 16:30:36.808657 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.852254 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:36:14.901511855 +0000 UTC Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.869589 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.869646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.869662 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.869683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.869702 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.973069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.973120 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.973136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.973154 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:36 crc kubenswrapper[4758]: I0122 16:30:36.973168 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:36Z","lastTransitionTime":"2026-01-22T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.075333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.075412 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.075435 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.075467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.075492 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:37Z","lastTransitionTime":"2026-01-22T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.179102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.179140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.179152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.179167 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.179178 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:37Z","lastTransitionTime":"2026-01-22T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.281711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.281792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.281808 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.281822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.281831 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:37Z","lastTransitionTime":"2026-01-22T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.385171 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.385254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.385269 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.385290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.385304 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:37Z","lastTransitionTime":"2026-01-22T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.488533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.488586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.488595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.488615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.488629 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:37Z","lastTransitionTime":"2026-01-22T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.590945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.590980 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.590995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.591015 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.591029 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:37Z","lastTransitionTime":"2026-01-22T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.693921 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.693950 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.693962 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.693977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.693989 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:37Z","lastTransitionTime":"2026-01-22T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.797287 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.797323 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.797332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.797346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.797357 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:37Z","lastTransitionTime":"2026-01-22T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.852472 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:10:56.276145489 +0000 UTC Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.900324 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.900367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.900385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.900407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:37 crc kubenswrapper[4758]: I0122 16:30:37.900425 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:37Z","lastTransitionTime":"2026-01-22T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.002017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.002044 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.002052 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.002065 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.002077 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.104209 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.104249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.104260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.104277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.104289 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.206853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.206893 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.206902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.206918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.206928 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.310351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.310419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.310441 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.310470 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.310493 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.413838 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.413875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.413883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.413898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.413917 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.516341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.516701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.516974 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.517223 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.517448 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.621023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.621368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.621566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.621699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.621984 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.725572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.725668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.725695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.725841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.725869 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.807010 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.807081 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:38 crc kubenswrapper[4758]: E0122 16:30:38.807144 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.807210 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:38 crc kubenswrapper[4758]: E0122 16:30:38.807209 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.807091 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:38 crc kubenswrapper[4758]: E0122 16:30:38.807304 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:38 crc kubenswrapper[4758]: E0122 16:30:38.807389 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.818894 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.828298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.828566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.828591 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.828610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.828624 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.831425 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.841357 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.851378 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.853375 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:58:47.557541189 +0000 UTC Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.873805 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf
840af2b1155a40d591c248e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"v1.Pod event handler 6 for removal\\\\nI0122 16:30:30.316677 6384 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:30:30.316778 6384 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:30.316793 6384 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:30:30.316803 6384 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 16:30:30.316810 6384 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:30.316816 6384 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:30.316861 6384 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 16:30:30.316907 6384 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:30.316935 6384 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:30.316967 6384 factory.go:656] Stopping watch factory\\\\nI0122 16:30:30.316968 6384 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 16:30:30.317043 6384 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:30.317063 6384 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:30.317083 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 16:30:30.317040 6384 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0122 16:30:30.317166 6384 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.885027 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.897060 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.906807 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.917729 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.930351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.930388 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.930397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.930411 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.930421 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:38Z","lastTransitionTime":"2026-01-22T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.935937 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.944276 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.955071 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.965716 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.975936 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:38 crc kubenswrapper[4758]: I0122 16:30:38.988175 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.000513 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.012461 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.030494 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.031904 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.031930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.031939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.031951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.031977 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.135119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.135158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.135168 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.135182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.135193 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.238242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.238285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.238299 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.238328 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.238352 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.341067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.341136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.341157 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.341177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.341193 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.444346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.444381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.444393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.444409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.444420 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.547179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.547218 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.547226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.547263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.547274 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.649678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.649730 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.649773 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.649796 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.649812 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.752806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.752846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.752859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.752875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.752887 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.853894 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:56:47.952192174 +0000 UTC Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.856808 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.856860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.856906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.856931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.856947 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.959450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.959731 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.959917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.960013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:39 crc kubenswrapper[4758]: I0122 16:30:39.960103 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:39Z","lastTransitionTime":"2026-01-22T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.063049 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.063123 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.063136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.063149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.063158 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.166025 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.166055 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.166064 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.166076 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.166086 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.268922 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.268968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.268980 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.268997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.269010 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.372040 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.372074 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.372082 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.372118 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.372128 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.475107 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.475152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.475164 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.475181 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.475192 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.577379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.577710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.578127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.578276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.578414 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.681278 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.681322 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.681333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.681351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.681364 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.784482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.784530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.784538 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.784554 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.784565 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.808094 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.808138 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.808208 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:40 crc kubenswrapper[4758]: E0122 16:30:40.808221 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.808236 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:40 crc kubenswrapper[4758]: E0122 16:30:40.808319 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:40 crc kubenswrapper[4758]: E0122 16:30:40.808377 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:40 crc kubenswrapper[4758]: E0122 16:30:40.808403 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.855089 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:50:34.72688283 +0000 UTC Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.887143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.887347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.887421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.887486 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.887547 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.903859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.904056 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.904121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.904191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.904250 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: E0122 16:30:40.915762 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.919496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.919604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.919664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.919771 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.919858 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: E0122 16:30:40.937611 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.948008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.948292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.948448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.948592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.948767 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: E0122 16:30:40.962061 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.966572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.966611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.966622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.966636 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:40 crc kubenswrapper[4758]: I0122 16:30:40.966647 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:40Z","lastTransitionTime":"2026-01-22T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:40 crc kubenswrapper[4758]: E0122 16:30:40.998547 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.004598 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.004658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.004684 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.004734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.004802 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: E0122 16:30:41.024202 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:41 crc kubenswrapper[4758]: E0122 16:30:41.024342 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.026107 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.026300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.026324 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.026357 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.026374 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.128890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.128955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.128978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.129005 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.129027 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.233208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.233258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.233277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.233298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.233314 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.336191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.336222 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.336238 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.336254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.336265 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.438544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.438583 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.438596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.438625 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.438637 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.541477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.541533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.541545 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.541563 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.541575 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.644716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.644814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.644840 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.644872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.644895 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.747945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.748017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.748034 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.748061 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.748077 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.851344 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.851400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.851411 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.851426 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.851436 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.856515 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:06:59.38582177 +0000 UTC Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.954282 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.954339 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.954350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.954368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:41 crc kubenswrapper[4758]: I0122 16:30:41.954383 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:41Z","lastTransitionTime":"2026-01-22T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.057452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.057492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.057501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.057515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.057525 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.159683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.159730 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.159761 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.159778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.159792 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.261652 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.261688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.261697 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.261714 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.261723 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.364569 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.364596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.364603 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.364616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.364626 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.466674 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.466716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.466728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.466800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.466908 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.570070 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.570106 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.570114 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.570128 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.570137 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.673480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.673543 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.673559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.673581 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.673597 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.776776 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.776814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.776862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.776890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.776935 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.808000 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.808050 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:42 crc kubenswrapper[4758]: E0122 16:30:42.808117 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.808016 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.808160 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:42 crc kubenswrapper[4758]: E0122 16:30:42.808206 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:42 crc kubenswrapper[4758]: E0122 16:30:42.808257 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:42 crc kubenswrapper[4758]: E0122 16:30:42.808772 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.857056 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:40:00.233590824 +0000 UTC Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.879649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.879677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.879685 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.879698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.879707 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.981624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.981668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.981678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.981695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:42 crc kubenswrapper[4758]: I0122 16:30:42.981708 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:42Z","lastTransitionTime":"2026-01-22T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.084224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.084290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.084309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.084333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.084349 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:43Z","lastTransitionTime":"2026-01-22T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.186610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.186654 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.186669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.186690 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.186702 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:43Z","lastTransitionTime":"2026-01-22T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.289279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.289323 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.289339 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.289355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.289365 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:43Z","lastTransitionTime":"2026-01-22T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.392148 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.392184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.392192 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.392205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.392214 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:43Z","lastTransitionTime":"2026-01-22T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.494841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.494879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.494887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.494902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.494913 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:43Z","lastTransitionTime":"2026-01-22T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.597532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.597579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.597592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.597608 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.597620 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:43Z","lastTransitionTime":"2026-01-22T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.700480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.700575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.700594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.700620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.700638 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:43Z","lastTransitionTime":"2026-01-22T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.803223 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.803265 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.803280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.803301 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.803317 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:43Z","lastTransitionTime":"2026-01-22T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.858184 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:08:19.380875333 +0000 UTC Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.905385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.905440 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.905456 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.905478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:43 crc kubenswrapper[4758]: I0122 16:30:43.905493 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:43Z","lastTransitionTime":"2026-01-22T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.007218 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.007262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.007275 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.007292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.007309 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.109410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.109450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.109462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.109479 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.109490 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.211467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.211553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.211573 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.211603 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.211626 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.313648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.313702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.313715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.313755 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.313768 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.415605 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.415649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.415661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.415677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.415689 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.518296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.518359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.518376 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.518400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.518416 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.620891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.620933 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.620945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.620958 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.620967 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.723444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.723488 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.723499 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.723524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.723535 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.808582 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.808644 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.808679 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.808601 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:44 crc kubenswrapper[4758]: E0122 16:30:44.808806 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:44 crc kubenswrapper[4758]: E0122 16:30:44.808844 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:44 crc kubenswrapper[4758]: E0122 16:30:44.809003 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:44 crc kubenswrapper[4758]: E0122 16:30:44.809346 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.825871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.825901 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.825913 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.825926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.825936 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.858930 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:01:04.890329768 +0000 UTC Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.927691 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.927731 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.927756 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.927770 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:44 crc kubenswrapper[4758]: I0122 16:30:44.927780 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:44Z","lastTransitionTime":"2026-01-22T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.030129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.030158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.030166 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.030179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.030187 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.132897 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.132925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.132933 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.132945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.132954 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.235009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.235041 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.235054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.235071 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.235083 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.336905 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.336936 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.336947 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.336960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.336970 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.438965 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.439018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.439028 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.439052 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.439063 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.541639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.541678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.541686 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.541703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.541712 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.648851 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.648923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.648943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.648967 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.648984 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.752078 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.752134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.752143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.752158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.752166 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.854588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.854625 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.854636 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.854651 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.854661 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.859958 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:02:27.094801377 +0000 UTC Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.957302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.957353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.957370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.957392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:45 crc kubenswrapper[4758]: I0122 16:30:45.957407 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:45Z","lastTransitionTime":"2026-01-22T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.059260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.059302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.059310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.059325 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.059334 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.161711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.161781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.161793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.161807 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.161818 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.263473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.263514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.263525 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.263540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.263550 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.366157 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.366195 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.366226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.366241 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.366252 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.468575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.468620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.468630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.468645 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.468655 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.571342 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.571385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.571393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.571406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.571416 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.674075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.674108 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.674118 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.674131 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.674141 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.776428 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.776485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.776494 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.776507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.776516 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.807104 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.807122 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.807123 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.807262 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:46 crc kubenswrapper[4758]: E0122 16:30:46.807400 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:46 crc kubenswrapper[4758]: E0122 16:30:46.807480 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:46 crc kubenswrapper[4758]: E0122 16:30:46.807612 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:46 crc kubenswrapper[4758]: E0122 16:30:46.807710 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.860546 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:25:14.46317646 +0000 UTC Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.878027 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.878073 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.878084 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.878100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.878114 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.981079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.981140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.981161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.981184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:46 crc kubenswrapper[4758]: I0122 16:30:46.981202 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:46Z","lastTransitionTime":"2026-01-22T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.083452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.083495 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.083505 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.083520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.083530 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:47Z","lastTransitionTime":"2026-01-22T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.186458 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.186511 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.186530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.186552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.186569 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:47Z","lastTransitionTime":"2026-01-22T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.289092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.289156 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.289184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.289200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.289212 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:47Z","lastTransitionTime":"2026-01-22T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.391793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.391880 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.391900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.391925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.391942 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:47Z","lastTransitionTime":"2026-01-22T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.493931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.494198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.494400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.494653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.494840 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:47Z","lastTransitionTime":"2026-01-22T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.597785 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.598002 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.598111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.598194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.598271 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:47Z","lastTransitionTime":"2026-01-22T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.702233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.702529 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.702611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.702679 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.702759 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:47Z","lastTransitionTime":"2026-01-22T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.804947 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.805171 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.805276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.805384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.805479 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:47Z","lastTransitionTime":"2026-01-22T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.807814 4758 scope.go:117] "RemoveContainer" containerID="99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9" Jan 22 16:30:47 crc kubenswrapper[4758]: E0122 16:30:47.808079 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.861064 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:33:59.04996917 +0000 UTC Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.907995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.908117 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.908227 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.908314 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:47 crc kubenswrapper[4758]: I0122 16:30:47.908395 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:47Z","lastTransitionTime":"2026-01-22T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.010941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.010985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.010995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.011009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.011020 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.113584 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.113667 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.113690 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.113715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.113733 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.215929 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.215967 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.215979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.215994 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.216009 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.319079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.319129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.319138 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.319154 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.319164 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.421636 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.421664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.421674 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.421686 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.421696 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.524637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.524711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.524722 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.524759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.524772 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.627361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.627409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.627421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.627438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.627449 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.705337 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:48 crc kubenswrapper[4758]: E0122 16:30:48.705391 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:48 crc kubenswrapper[4758]: E0122 16:30:48.705524 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs podName:3ef1c490-d5f9-458d-8b3e-8580a5f07df6 nodeName:}" failed. No retries permitted until 2026-01-22 16:31:20.705489451 +0000 UTC m=+102.188828916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs") pod "network-metrics-daemon-2xqns" (UID: "3ef1c490-d5f9-458d-8b3e-8580a5f07df6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.730191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.730233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.730250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.730266 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.730278 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.807160 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.807264 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:48 crc kubenswrapper[4758]: E0122 16:30:48.807433 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.807728 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.807794 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:48 crc kubenswrapper[4758]: E0122 16:30:48.807841 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:48 crc kubenswrapper[4758]: E0122 16:30:48.807964 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:48 crc kubenswrapper[4758]: E0122 16:30:48.808137 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.820345 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.833036 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.833239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.833360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.833053 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.833467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.833680 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.848503 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.862083 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 03:54:05.59335134 +0000 UTC Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.874288 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"v1.Pod event handler 6 for removal\\\\nI0122 16:30:30.316677 6384 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:30:30.316778 6384 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:30.316793 6384 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:30:30.316803 6384 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 16:30:30.316810 6384 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:30.316816 6384 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:30.316861 6384 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 16:30:30.316907 6384 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:30.316935 6384 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:30.316967 6384 factory.go:656] Stopping watch factory\\\\nI0122 16:30:30.316968 6384 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 16:30:30.317043 6384 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:30.317063 6384 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:30.317083 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 16:30:30.317040 6384 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0122 16:30:30.317166 6384 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.887377 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.902440 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.912939 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.925207 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.938846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.938891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.938899 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.938913 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.938924 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:48Z","lastTransitionTime":"2026-01-22T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.941296 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.956071 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.972722 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:48 crc kubenswrapper[4758]: I0122 16:30:48.987688 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.000888 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.013284 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.026003 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.037234 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.040919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.040946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.040954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.040966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.040976 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.058086 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.073717 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.143849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.143891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.143900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.143914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.143923 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.246817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.246870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.246884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.246903 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.246924 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.349117 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.349313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.349418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.349490 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.349575 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.452108 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.452680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.452763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.452859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.452918 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.555020 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.555069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.555078 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.555092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.555101 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.657501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.657529 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.657537 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.657549 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.657558 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.759781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.759820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.759829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.759845 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.759857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.862174 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:36:14.546548622 +0000 UTC Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.862860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.862902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.862914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.862932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.862947 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.964820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.964867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.964879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.964898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:49 crc kubenswrapper[4758]: I0122 16:30:49.964913 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:49Z","lastTransitionTime":"2026-01-22T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.066534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.066787 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.066870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.066948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.067027 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.169635 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.169667 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.169677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.169692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.169704 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.217689 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/0.log" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.217760 4758 generic.go:334] "Generic (PLEG): container finished" podID="97853b38-352d-42df-ad31-639c0e58093a" containerID="12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144" exitCode=1 Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.217790 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7dvfg" event={"ID":"97853b38-352d-42df-ad31-639c0e58093a","Type":"ContainerDied","Data":"12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.218158 4758 scope.go:117] "RemoveContainer" containerID="12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.231118 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.246548 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.260612 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.272102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.272132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.272140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.272153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.272161 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.277307 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"v1.Pod event handler 6 for removal\\\\nI0122 16:30:30.316677 6384 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:30:30.316778 6384 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:30.316793 6384 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:30:30.316803 6384 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 16:30:30.316810 6384 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:30.316816 6384 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:30.316861 6384 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 16:30:30.316907 6384 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:30.316935 6384 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:30.316967 6384 factory.go:656] Stopping watch factory\\\\nI0122 16:30:30.316968 6384 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 16:30:30.317043 6384 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:30.317063 6384 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:30.317083 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 16:30:30.317040 6384 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0122 16:30:30.317166 6384 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.289865 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.302244 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.313783 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.325050 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.333885 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.346370 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc9
5ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.357688 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.374170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.374208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.374218 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.374235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.374247 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.377243 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5
646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.392043 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.404270 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.414969 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.427844 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:49Z\\\",\\\"message\\\":\\\"2026-01-22T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f\\\\n2026-01-22T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f to /host/opt/cni/bin/\\\\n2026-01-22T16:30:04Z [verbose] multus-daemon started\\\\n2026-01-22T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.438852 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.451296 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.477179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.477215 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.477226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.477241 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.477252 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.579452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.579533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.579558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.579586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.579609 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.682019 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.682070 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.682086 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.682102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.682114 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.784349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.784393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.784410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.784427 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.784438 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.808440 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.808441 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.808469 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.809122 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:50 crc kubenswrapper[4758]: E0122 16:30:50.809330 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:50 crc kubenswrapper[4758]: E0122 16:30:50.809575 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:50 crc kubenswrapper[4758]: E0122 16:30:50.809717 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:50 crc kubenswrapper[4758]: E0122 16:30:50.809869 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.863196 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:13:37.852825452 +0000 UTC Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.886332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.886580 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.886658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.886764 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.886862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.989646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.989957 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.990040 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.990107 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:50 crc kubenswrapper[4758]: I0122 16:30:50.990207 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:50Z","lastTransitionTime":"2026-01-22T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.091812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.092069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.092158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.092240 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.092300 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.194253 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.194291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.194301 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.194318 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.194330 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.222495 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/0.log" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.222724 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7dvfg" event={"ID":"97853b38-352d-42df-ad31-639c0e58093a","Type":"ContainerStarted","Data":"56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.236300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.236415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.236439 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.236471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.236495 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.236976 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.251197 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: E0122 16:30:51.251625 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.255665 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.255702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.255712 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.255730 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.255757 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: E0122 16:30:51.270246 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.273259 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"v1.Pod event handler 6 for removal\\\\nI0122 16:30:30.316677 6384 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:30:30.316778 6384 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:30.316793 6384 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:30:30.316803 6384 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 16:30:30.316810 6384 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:30.316816 6384 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:30.316861 6384 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 16:30:30.316907 6384 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:30.316935 6384 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:30.316967 6384 factory.go:656] Stopping watch factory\\\\nI0122 16:30:30.316968 6384 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 16:30:30.317043 6384 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:30.317063 6384 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:30.317083 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 16:30:30.317040 6384 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0122 16:30:30.317166 6384 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.274273 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.274330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.274341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.274354 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.274364 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
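The pod status embedded in the patch above shows ovnkube-controller exiting with code 1 and being held in CrashLoopBackOff with "back-off 20s" at restartCount 2. That figure lines up with the usual kubelet restart back-off progression, sketched below under the assumption of upstream defaults (10s initial delay, doubled per crash, capped at 5m); those defaults are an assumption, not something this log records.

```go
// backoff.go: reproduce the back-off progression implied by the
// CrashLoopBackOff message above, assuming kubelet's default schedule.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second    // assumed initial back-off
	maxDelay := 5 * time.Minute  // assumed cap
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("after crash %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```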
Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.283355 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: E0122 16:30:51.285953 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.288838 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.288914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.288925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.288941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.288955 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.298625 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: E0122 16:30:51.300296 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"8
3805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.306999 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.307113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.307188 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.307261 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.307325 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.309202 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: E0122 16:30:51.318255 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: E0122 16:30:51.318513 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.319974 4758 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.320039 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.320309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.320382 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.320470 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.320536 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.332593 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.342309 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.356109 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.367194 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.379180 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.394282 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.407231 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:49Z\\\",\\\"message\\\":\\\"2026-01-22T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f\\\\n2026-01-22T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f to /host/opt/cni/bin/\\\\n2026-01-22T16:30:04Z [verbose] multus-daemon started\\\\n2026-01-22T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.417897 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.423875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.423915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.423927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.423945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.423955 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.439256 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.451510 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.466019 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.527331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.527408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.527424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.527447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.527471 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.629931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.629979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.629991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.630008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.630019 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.732604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.732643 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.732655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.732671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.732682 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.822013 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.835941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.836094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.836123 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.836195 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.836221 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.864284 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:21:27.344527512 +0000 UTC Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.938703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.938730 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.938759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.938771 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:51 crc kubenswrapper[4758]: I0122 16:30:51.938782 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:51Z","lastTransitionTime":"2026-01-22T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.041402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.041450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.041467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.041487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.041502 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.143541 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.143593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.143605 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.143622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.143634 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.246195 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.246254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.246265 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.246289 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.246302 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.348943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.349003 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.349023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.349047 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.349065 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.451305 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.451367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.451383 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.451403 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.451419 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.553783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.553826 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.553836 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.553851 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.553861 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.656991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.657069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.657095 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.657129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.657150 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.760233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.760303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.760315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.760335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.760348 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.807814 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.807866 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.807896 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.807867 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:52 crc kubenswrapper[4758]: E0122 16:30:52.808060 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:52 crc kubenswrapper[4758]: E0122 16:30:52.808186 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:52 crc kubenswrapper[4758]: E0122 16:30:52.808337 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:52 crc kubenswrapper[4758]: E0122 16:30:52.808425 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.864425 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:25:21.106262731 +0000 UTC Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.864543 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.864604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.864621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.864646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.864667 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.967345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.967410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.967425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.967441 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:52 crc kubenswrapper[4758]: I0122 16:30:52.967453 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:52Z","lastTransitionTime":"2026-01-22T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.070132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.070189 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.070207 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.070233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.070250 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.173054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.173125 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.173155 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.173191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.173225 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.276063 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.276098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.276107 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.276119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.276130 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.379480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.379548 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.379570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.379601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.379624 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.482280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.482344 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.482361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.482384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.482401 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.585390 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.585447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.585465 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.585494 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.585516 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.688033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.688068 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.688078 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.688096 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.688107 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.790493 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.790522 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.790532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.790547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.790557 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.864970 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:47:49.38144559 +0000 UTC Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.893296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.893335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.893346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.893361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.893373 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.995308 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.995346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.995359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.995374 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:53 crc kubenswrapper[4758]: I0122 16:30:53.995385 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:53Z","lastTransitionTime":"2026-01-22T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.098174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.098214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.098222 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.098236 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.098245 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:54Z","lastTransitionTime":"2026-01-22T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.200317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.200380 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.200390 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.200406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.200418 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:54Z","lastTransitionTime":"2026-01-22T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.302379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.302421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.302433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.302489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.302502 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:54Z","lastTransitionTime":"2026-01-22T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.404872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.404909 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.404918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.404935 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.404943 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:54Z","lastTransitionTime":"2026-01-22T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.506961 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.506998 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.507008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.507021 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.507031 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:54Z","lastTransitionTime":"2026-01-22T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.609133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.609214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.609232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.609253 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.609267 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:54Z","lastTransitionTime":"2026-01-22T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.711824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.711857 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.711866 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.711879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.711906 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:54Z","lastTransitionTime":"2026-01-22T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.810957 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.811083 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.811145 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.810965 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:54 crc kubenswrapper[4758]: E0122 16:30:54.811134 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:54 crc kubenswrapper[4758]: E0122 16:30:54.811211 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:54 crc kubenswrapper[4758]: E0122 16:30:54.811374 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:54 crc kubenswrapper[4758]: E0122 16:30:54.811543 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.814639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.814692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.814709 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.814789 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.814816 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:54Z","lastTransitionTime":"2026-01-22T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.865644 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:33:09.402017169 +0000 UTC Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.917437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.917485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.917496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.917516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:54 crc kubenswrapper[4758]: I0122 16:30:54.917530 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:54Z","lastTransitionTime":"2026-01-22T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.019648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.019688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.019701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.019717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.019729 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.121642 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.121692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.121702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.121717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.121731 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.224786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.224835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.224847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.224866 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.224879 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.327596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.327630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.327639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.327652 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.327661 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.430987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.431027 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.431038 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.431079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.431091 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.533718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.533807 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.533820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.533836 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.533848 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.636616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.636659 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.636671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.636687 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.636700 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.740710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.740800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.740816 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.740835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.740846 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.842888 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.842959 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.842983 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.843011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.843036 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.866883 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:57:53.87046695 +0000 UTC Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.945964 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.946023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.946040 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.946063 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:55 crc kubenswrapper[4758]: I0122 16:30:55.946081 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:55Z","lastTransitionTime":"2026-01-22T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.048682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.048788 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.048807 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.048830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.048847 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.152851 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.152914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.152934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.152958 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.152977 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.255239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.255294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.255312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.255332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.255348 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.357811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.357955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.357983 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.358013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.358032 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.460533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.460580 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.460595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.460616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.460632 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.562628 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.562683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.562715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.562757 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.562774 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.665213 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.665279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.665296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.665321 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.665339 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.768250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.768292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.768304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.768320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.768330 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.807424 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.807497 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:56 crc kubenswrapper[4758]: E0122 16:30:56.807543 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.807515 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:56 crc kubenswrapper[4758]: E0122 16:30:56.807635 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:56 crc kubenswrapper[4758]: E0122 16:30:56.807730 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.807801 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:56 crc kubenswrapper[4758]: E0122 16:30:56.807874 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.867574 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:48:16.082300071 +0000 UTC Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.870979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.871050 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.871073 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.871102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.871123 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.973470 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.973496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.973505 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.973517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:56 crc kubenswrapper[4758]: I0122 16:30:56.973526 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:56Z","lastTransitionTime":"2026-01-22T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.076887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.076924 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.076935 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.076949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.076959 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:57Z","lastTransitionTime":"2026-01-22T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.180232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.180271 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.180281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.180297 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.180308 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:57Z","lastTransitionTime":"2026-01-22T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.282268 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.282341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.282366 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.282396 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.282422 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:57Z","lastTransitionTime":"2026-01-22T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.385233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.385311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.385335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.385366 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.385387 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:57Z","lastTransitionTime":"2026-01-22T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.488542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.488591 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.488602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.488621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.488631 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:57Z","lastTransitionTime":"2026-01-22T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.591908 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.591986 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.591996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.592020 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.592034 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:57Z","lastTransitionTime":"2026-01-22T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.695243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.695330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.695347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.695376 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.695395 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:57Z","lastTransitionTime":"2026-01-22T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.798369 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.798406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.798417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.798433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.798445 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:57Z","lastTransitionTime":"2026-01-22T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.867898 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 06:14:35.610294981 +0000 UTC Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.902451 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.902510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.902528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.902550 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:57 crc kubenswrapper[4758]: I0122 16:30:57.902566 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:57Z","lastTransitionTime":"2026-01-22T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.005177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.005258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.005285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.005315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.005339 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.108023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.108087 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.108104 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.108131 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.108149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.211170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.211232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.211244 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.211263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.211275 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.313887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.313948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.313971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.313996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.314013 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.416010 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.416044 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.416054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.416069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.416079 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.518649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.518692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.518703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.518720 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.518732 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.621433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.621478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.621490 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.621507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.621518 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.725106 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.725150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.725182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.725200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.725211 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.808083 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.808125 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.808095 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.808104 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:58 crc kubenswrapper[4758]: E0122 16:30:58.808222 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:58 crc kubenswrapper[4758]: E0122 16:30:58.808319 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:58 crc kubenswrapper[4758]: E0122 16:30:58.808425 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:58 crc kubenswrapper[4758]: E0122 16:30:58.808523 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.824116 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:49Z\\\",\\\"message\\\":\\\"2026-01-22T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f\\\\n2026-01-22T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f to /host/opt/cni/bin/\\\\n2026-01-22T16:30:04Z [verbose] multus-daemon started\\\\n2026-01-22T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.827908 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.827955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.827971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.827992 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.828007 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.835134 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.856803 4758 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:
29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.868495 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:43:53.588916224 +0000 UTC Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.872900 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.891061 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.908690 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 
16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.927321 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.931080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.931112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.931124 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.931140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.931151 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:58Z","lastTransitionTime":"2026-01-22T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.946489 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"v1.Pod event handler 6 for removal\\\\nI0122 16:30:30.316677 6384 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:30:30.316778 6384 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:30.316793 6384 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:30:30.316803 6384 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 16:30:30.316810 6384 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:30.316816 6384 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:30.316861 6384 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 16:30:30.316907 6384 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:30.316935 6384 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:30.316967 6384 factory.go:656] Stopping watch factory\\\\nI0122 16:30:30.316968 6384 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 16:30:30.317043 6384 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:30.317063 6384 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:30.317083 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 16:30:30.317040 6384 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0122 16:30:30.317166 6384 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.958584 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.969713 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf18bca-54c9-46a7-ae1a-0e4cd3f2ff2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdcb3871deb3a437bfd84b017af8233d06a10cbc0da01bb1aca18a10b40ca3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.982884 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:58 crc kubenswrapper[4758]: I0122 16:30:58.993112 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.003220 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.015972 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.024183 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.033421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.033449 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.033458 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.033476 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.033487 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.035813 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.045752 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.057708 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.067809 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:30:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.136114 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.136181 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.136193 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.136210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.136222 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.238810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.238874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.238920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.238954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.238982 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.341755 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.341798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.341810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.341827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.341839 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.444277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.444348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.444369 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.444395 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.444416 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.546931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.546966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.546975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.546988 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.547016 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.650066 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.650142 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.650165 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.650191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.650209 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.752735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.752793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.752805 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.752821 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.752832 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.855201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.855239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.855248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.855259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.855267 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.868616 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:10:33.732848495 +0000 UTC Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.957760 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.957789 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.957798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.957811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:59 crc kubenswrapper[4758]: I0122 16:30:59.957820 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:59Z","lastTransitionTime":"2026-01-22T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.061253 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.061327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.061376 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.061402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.061421 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.164239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.164303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.164320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.164342 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.164358 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.267660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.267787 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.267828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.267859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.267882 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.371443 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.371506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.371528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.371561 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.371584 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.473791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.473832 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.473841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.473853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.473863 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.576411 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.576481 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.576504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.576535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.576559 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.679637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.679710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.679734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.679815 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.679838 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.733541 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.733651 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:04.733629714 +0000 UTC m=+146.216969009 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.782270 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.782316 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.782329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.782344 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.782354 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.807554 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.807564 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.807621 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.807850 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.807842 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.807902 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.807943 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.808130 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.834728 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.834881 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.834924 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.834965 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.834926 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.834964 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835148 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:32:04.835118694 +0000 UTC m=+146.318458019 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835190 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:32:04.835171076 +0000 UTC m=+146.318510371 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835038 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835196 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835220 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835242 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835241 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835268 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835272 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:32:04.835263238 +0000 UTC m=+146.318602543 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:31:00 crc kubenswrapper[4758]: E0122 16:31:00.835375 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:32:04.83535046 +0000 UTC m=+146.318689775 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.868806 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:45:54.7553484 +0000 UTC Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.884854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.884898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.884910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.884927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.884939 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.986890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.986944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.986956 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.986974 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:00 crc kubenswrapper[4758]: I0122 16:31:00.986986 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:00Z","lastTransitionTime":"2026-01-22T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.090202 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.090241 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.090251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.090267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.090279 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.193245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.193309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.193331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.193361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.193384 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.295624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.295665 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.295676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.295693 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.295704 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.399030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.399116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.399149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.399179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.399200 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.502898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.502962 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.502978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.503001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.503019 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.606810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.606854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.606866 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.606885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.606898 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.610663 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.610724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.610764 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.610786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.610799 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: E0122 16:31:01.631158 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.635852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.635917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.635934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.635960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.635976 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: E0122 16:31:01.654154 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.658868 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.658926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.658941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.658959 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.658977 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: E0122 16:31:01.675245 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.679272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.679296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.679346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.679362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.679370 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: E0122 16:31:01.695055 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.699604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.699683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.699702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.699728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.699775 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: E0122 16:31:01.721229 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:01 crc kubenswrapper[4758]: E0122 16:31:01.721468 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.723397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.723433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.723445 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.723462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.723474 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.826495 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.826620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.826680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.826718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.826734 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.869604 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:15:13.366597694 +0000 UTC Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.929101 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.929161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.929180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.929205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:01 crc kubenswrapper[4758]: I0122 16:31:01.929225 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:01Z","lastTransitionTime":"2026-01-22T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.032550 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.032656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.032676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.032729 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.032835 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.136648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.136690 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.136708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.136732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.136787 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.239810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.239852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.239861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.239877 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.239887 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.342988 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.343039 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.343058 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.343080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.343095 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.446140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.446191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.446206 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.446225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.446243 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.549508 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.549548 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.549560 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.549576 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.549587 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.651922 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.651962 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.651976 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.651989 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.651999 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.753977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.754049 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.754059 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.754092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.754103 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.807703 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.807840 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:02 crc kubenswrapper[4758]: E0122 16:31:02.807860 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.808863 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:02 crc kubenswrapper[4758]: E0122 16:31:02.809048 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:02 crc kubenswrapper[4758]: E0122 16:31:02.809319 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.808917 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.809601 4758 scope.go:117] "RemoveContainer" containerID="99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9" Jan 22 16:31:02 crc kubenswrapper[4758]: E0122 16:31:02.809930 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.856456 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.856510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.856526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.856549 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.856566 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.869883 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:07:33.582904947 +0000 UTC Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.960431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.960486 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.960507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.960538 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:02 crc kubenswrapper[4758]: I0122 16:31:02.960562 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:02Z","lastTransitionTime":"2026-01-22T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.062554 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.062593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.062608 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.062623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.062635 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.165077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.165128 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.165137 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.165181 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.165193 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.267404 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.267624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.267701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.267794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.267876 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.371716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.371826 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.371857 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.371887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.372024 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.475128 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.475162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.475172 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.475188 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.475198 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.578546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.578899 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.579135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.579402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.579579 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.682647 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.682715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.682724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.682763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.682778 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.785471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.785501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.785509 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.785521 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.785532 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.870423 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:18:33.33308192 +0000 UTC Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.887666 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.887715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.887731 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.887773 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.887791 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.989912 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.989954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.989964 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.989980 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:03 crc kubenswrapper[4758]: I0122 16:31:03.989992 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:03Z","lastTransitionTime":"2026-01-22T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.092869 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.092918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.092927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.092944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.092956 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:04Z","lastTransitionTime":"2026-01-22T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.195182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.195229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.195240 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.195256 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.195268 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:04Z","lastTransitionTime":"2026-01-22T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.270943 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/2.log" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.274281 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.274768 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.288906 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.297515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.297553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.297563 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.297580 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.297592 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:04Z","lastTransitionTime":"2026-01-22T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.303391 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.320713 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.336838 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.358978 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc9
5ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.371670 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.394648 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.399634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.399664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.399673 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.399691 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.399701 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:04Z","lastTransitionTime":"2026-01-22T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.412930 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.426719 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.441658 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 
16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.455221 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:49Z\\\",\\\"message\\\":\\\"2026-01-22T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f\\\\n2026-01-22T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f to /host/opt/cni/bin/\\\\n2026-01-22T16:30:04Z [verbose] multus-daemon started\\\\n2026-01-22T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.468026 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.484786 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.496184 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf18bca-54c9-46a7-ae1a-0e4cd3f2ff2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdcb3871deb3a437bfd84b017af8233d06a10cbc0da01bb1aca18a10b40ca3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.502104 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.502127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.502135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.502147 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.502155 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:04Z","lastTransitionTime":"2026-01-22T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.509668 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.520907 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.533488 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.552256 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a265cc950ba85a41da92efbf8a471efa10bdc6e
f7aa7837fc86c3e4e023a263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"v1.Pod event handler 6 for removal\\\\nI0122 16:30:30.316677 6384 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:30:30.316778 6384 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:30.316793 6384 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:30:30.316803 6384 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 16:30:30.316810 6384 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:30.316816 6384 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:30.316861 6384 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 16:30:30.316907 6384 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:30.316935 6384 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:30.316967 6384 factory.go:656] Stopping watch factory\\\\nI0122 16:30:30.316968 6384 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 16:30:30.317043 6384 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:30.317063 6384 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:30.317083 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 16:30:30.317040 6384 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0122 16:30:30.317166 6384 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:31:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.562143 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 22 
16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.604604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.604884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.604977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.605115 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.605226 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:04Z","lastTransitionTime":"2026-01-22T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.707692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.708184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.708364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.708518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.708675 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:04Z","lastTransitionTime":"2026-01-22T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.808581 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:04 crc kubenswrapper[4758]: E0122 16:31:04.808685 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.808855 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:04 crc kubenswrapper[4758]: E0122 16:31:04.808903 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.809001 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:04 crc kubenswrapper[4758]: E0122 16:31:04.809062 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.809452 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:04 crc kubenswrapper[4758]: E0122 16:31:04.809505 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.810444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.810466 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.810474 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.810484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.810492 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:04Z","lastTransitionTime":"2026-01-22T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.870724 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:02:27.346724007 +0000 UTC Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.913242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.913294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.913311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.913331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:04 crc kubenswrapper[4758]: I0122 16:31:04.913346 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:04Z","lastTransitionTime":"2026-01-22T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.016038 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.016135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.016155 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.016178 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.016197 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.118549 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.118596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.118611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.118629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.118642 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.221026 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.221074 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.221086 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.221106 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.221120 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.280361 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/3.log" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.281010 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/2.log" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.283943 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263" exitCode=1 Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.284017 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.284079 4758 scope.go:117] "RemoveContainer" containerID="99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.284792 4758 scope.go:117] "RemoveContainer" containerID="7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263" Jan 22 16:31:05 crc kubenswrapper[4758]: E0122 16:31:05.285144 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.298691 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.313700 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.323970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.324187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.324320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.324420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.324511 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.334176 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c5e5416f238f2982c2f7867eeca80db18dfebf840af2b1155a40d591c248e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:30Z\\\",\\\"message\\\":\\\"v1.Pod event handler 6 for removal\\\\nI0122 16:30:30.316677 6384 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:30:30.316778 6384 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:30:30.316793 6384 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:30:30.316803 6384 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 16:30:30.316810 6384 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:30:30.316816 6384 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:30:30.316861 6384 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 16:30:30.316907 6384 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:30:30.316935 6384 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:30:30.316967 6384 factory.go:656] Stopping watch factory\\\\nI0122 16:30:30.316968 6384 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 16:30:30.317043 6384 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:30:30.317063 6384 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:30:30.317083 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 16:30:30.317040 6384 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0122 16:30:30.317166 6384 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:31:04Z\\\",\\\"message\\\":\\\".417396 6826 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 16:31:04.417817 6826 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 16:31:04.418084 6826 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 16:31:04.418126 6826 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 16:31:04.418809 6826 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:31:04.418839 6826 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:31:04.418855 6826 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:31:04.418896 6826 factory.go:656] Stopping watch factory\\\\nI0122 16:31:04.418923 6826 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:31:04.418925 6826 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:31:04.418949 6826 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 
16:31:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:31:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.349641 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.362092 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf18bca-54c9-46a7-ae1a-0e4cd3f2ff2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdcb3871deb3a437bfd84b017af8233d06a10cbc0da01bb1aca18a10b40ca3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.382684 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.394554 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.409761 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.423112 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eea
de0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30
:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.427505 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.427563 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.427579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.427603 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.427623 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.433010 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.443774 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.455472 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.465988 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.479686 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.491777 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:49Z\\\",\\\"message\\\":\\\"2026-01-22T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f\\\\n2026-01-22T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f to /host/opt/cni/bin/\\\\n2026-01-22T16:30:04Z [verbose] multus-daemon started\\\\n2026-01-22T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.502278 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.518803 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.530350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.530388 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.530400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.530419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.530431 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.533242 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.543922 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.633054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.633091 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.633102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.633117 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.633128 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.735708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.735820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.735841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.735872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.735915 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.839858 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.839932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.839955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.839986 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.840010 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.871818 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:00:00.139409048 +0000 UTC Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.943169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.943251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.943277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.943307 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:05 crc kubenswrapper[4758]: I0122 16:31:05.943327 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:05Z","lastTransitionTime":"2026-01-22T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.045895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.045948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.045960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.045977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.045992 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.148913 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.148982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.148995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.149012 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.149026 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.251898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.251939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.251951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.251968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.251980 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.290422 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/3.log" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.295545 4758 scope.go:117] "RemoveContainer" containerID="7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263" Jan 22 16:31:06 crc kubenswrapper[4758]: E0122 16:31:06.295864 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.314452 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.332017 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.343690 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.354122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.354194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.354213 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.354232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.354247 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.364883 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.377894 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:49Z\\\",\\\"message\\\":\\\"2026-01-22T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f\\\\n2026-01-22T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f to /host/opt/cni/bin/\\\\n2026-01-22T16:30:04Z [verbose] multus-daemon started\\\\n2026-01-22T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.387055 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.400605 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.411659 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf18bca-54c9-46a7-ae1a-0e4cd3f2ff2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdcb3871deb3a437bfd84b017af8233d06a10cbc0da01bb1aca18a10b40ca3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.424437 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3
5825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.434917 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.446617 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.455854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.455887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.455896 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.455910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.455920 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.465216 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a265cc950ba85a41da92efbf8a471efa10bdc6e
f7aa7837fc86c3e4e023a263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:31:04Z\\\",\\\"message\\\":\\\".417396 6826 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 16:31:04.417817 6826 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 16:31:04.418084 6826 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 16:31:04.418126 6826 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 16:31:04.418809 6826 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:31:04.418839 6826 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:31:04.418855 6826 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:31:04.418896 6826 factory.go:656] Stopping watch factory\\\\nI0122 16:31:04.418923 6826 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:31:04.418925 6826 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:31:04.418949 6826 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:31:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:31:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.480038 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.489967 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.501303 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.512154 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.521485 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.534586 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc9
5ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.544445 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:06Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.558216 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.558251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.558262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.558277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.558288 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.661145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.661204 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.661220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.661242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.661261 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.764614 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.764768 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.764788 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.764811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.764828 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.807186 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.807311 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.807211 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.807420 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:06 crc kubenswrapper[4758]: E0122 16:31:06.807580 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:06 crc kubenswrapper[4758]: E0122 16:31:06.808024 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:06 crc kubenswrapper[4758]: E0122 16:31:06.808138 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:06 crc kubenswrapper[4758]: E0122 16:31:06.808443 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.868091 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.868162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.868184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.868213 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.868233 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.872983 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:12:52.835301645 +0000 UTC Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.971667 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.971722 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.971734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.971779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:06 crc kubenswrapper[4758]: I0122 16:31:06.971796 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:06Z","lastTransitionTime":"2026-01-22T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.074949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.075027 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.075045 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.075068 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.075086 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:07Z","lastTransitionTime":"2026-01-22T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.178249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.178301 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.178313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.178333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.178346 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:07Z","lastTransitionTime":"2026-01-22T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.282015 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.282092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.282108 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.282131 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.282147 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:07Z","lastTransitionTime":"2026-01-22T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.386779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.386824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.386837 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.386856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.386866 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:07Z","lastTransitionTime":"2026-01-22T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.489290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.489335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.489349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.489370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.489381 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:07Z","lastTransitionTime":"2026-01-22T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.592173 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.592279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.592300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.592321 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.592344 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:07Z","lastTransitionTime":"2026-01-22T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.694720 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.694790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.694802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.694820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.694854 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:07Z","lastTransitionTime":"2026-01-22T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.798279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.798318 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.798331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.798347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.798358 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:07Z","lastTransitionTime":"2026-01-22T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.873461 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:12:23.557235102 +0000 UTC Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.900887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.900966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.900989 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.901019 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:07 crc kubenswrapper[4758]: I0122 16:31:07.901042 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:07Z","lastTransitionTime":"2026-01-22T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.002923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.003008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.003036 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.003066 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.003100 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.105030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.105096 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.105114 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.105130 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.105140 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.207509 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.207555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.207570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.207587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.207597 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.310029 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.310072 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.310084 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.310100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.310112 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.412578 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.412630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.412643 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.412660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.412671 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.515481 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.515560 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.515579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.515606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.515628 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.618678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.618735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.618781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.618802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.618815 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.721608 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.721897 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.721999 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.722123 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.722221 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.807924 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.807945 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.807985 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:08 crc kubenswrapper[4758]: E0122 16:31:08.808058 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.808207 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:08 crc kubenswrapper[4758]: E0122 16:31:08.808253 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:08 crc kubenswrapper[4758]: E0122 16:31:08.808368 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:08 crc kubenswrapper[4758]: E0122 16:31:08.808411 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.824661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.824706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.824721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.824757 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.824772 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.830432 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:49Z\\\",\\\"message\\\":\\\"2026-01-22T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f\\\\n2026-01-22T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f to /host/opt/cni/bin/\\\\n2026-01-22T16:30:04Z [verbose] multus-daemon started\\\\n2026-01-22T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.844586 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.862790 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71f
a873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.873963 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:45:14.616097072 +0000 UTC Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.881780 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.895382 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.911075 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 
16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.925809 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.927025 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.927062 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.927075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.927094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.927106 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:08Z","lastTransitionTime":"2026-01-22T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.944824 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:31:04Z\\\",\\\"message\\\":\\\".417396 6826 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 16:31:04.417817 6826 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 16:31:04.418084 6826 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 16:31:04.418126 6826 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 16:31:04.418809 6826 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:31:04.418839 6826 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:31:04.418855 6826 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:31:04.418896 6826 factory.go:656] Stopping watch factory\\\\nI0122 16:31:04.418923 6826 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:31:04.418925 6826 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:31:04.418949 6826 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:31:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:31:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.954762 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.962732 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf18bca-54c9-46a7-ae1a-0e4cd3f2ff2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdcb3871deb3a437bfd84b017af8233d06a10cbc0da01bb1aca18a10b40ca3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.973491 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.985048 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:08 crc kubenswrapper[4758]: I0122 16:31:08.997231 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:08Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.011335 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.022404 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.029320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.029375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.029384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.029397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.029407 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.034195 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.045125 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.056621 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.067377 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.131118 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.131150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.131158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.131170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.131179 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.233513 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.233563 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.233576 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.233594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.233609 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.335202 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.335244 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.335255 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.335268 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.335277 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.436984 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.437029 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.437038 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.437052 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.437062 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.539919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.539969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.539981 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.539999 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.540013 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.642353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.642434 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.642457 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.642486 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.642510 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.744858 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.744910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.744927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.744951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.744969 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.847878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.847958 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.847971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.847995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.848010 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.875410 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:00:15.721723347 +0000 UTC Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.950294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.950364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.950387 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.950420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:09 crc kubenswrapper[4758]: I0122 16:31:09.950443 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:09Z","lastTransitionTime":"2026-01-22T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.052958 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.053056 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.053073 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.053096 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.053112 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.156343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.156410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.156435 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.156464 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.156485 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.259590 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.259652 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.259672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.259699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.259716 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.362554 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.362623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.362632 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.362649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.362659 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.466250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.466570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.466705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.466867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.467000 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.570306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.570666 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.570878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.571069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.571207 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.674561 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.674601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.674611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.674626 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.674636 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.776964 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.777022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.777034 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.777053 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.777067 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.807697 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:10 crc kubenswrapper[4758]: E0122 16:31:10.807942 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.808002 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.808038 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.808239 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:10 crc kubenswrapper[4758]: E0122 16:31:10.808349 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:10 crc kubenswrapper[4758]: E0122 16:31:10.808571 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:10 crc kubenswrapper[4758]: E0122 16:31:10.808778 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.875563 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:46:29.289822633 +0000 UTC Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.879494 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.879573 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.879595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.879622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.879643 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.982082 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.982130 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.982145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.982167 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:10 crc kubenswrapper[4758]: I0122 16:31:10.982183 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:10Z","lastTransitionTime":"2026-01-22T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.085213 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.085422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.085445 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.085501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.085516 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:11Z","lastTransitionTime":"2026-01-22T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.188552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.188639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.188655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.188672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.188683 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:11Z","lastTransitionTime":"2026-01-22T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.291925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.291976 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.291987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.292005 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.292166 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:11Z","lastTransitionTime":"2026-01-22T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.394686 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.394725 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.394766 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.394811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.394830 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:11Z","lastTransitionTime":"2026-01-22T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.498949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.498997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.499008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.499028 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.499041 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:11Z","lastTransitionTime":"2026-01-22T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.602672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.603015 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.603207 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.603381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.603691 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:11Z","lastTransitionTime":"2026-01-22T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.707224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.707273 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.707284 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.707301 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.707312 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:11Z","lastTransitionTime":"2026-01-22T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.810290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.810336 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.810349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.810364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.810378 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:11Z","lastTransitionTime":"2026-01-22T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.876351 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:11:00.206567044 +0000 UTC Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.912261 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.912289 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.912297 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.912309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:11 crc kubenswrapper[4758]: I0122 16:31:11.912318 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:11Z","lastTransitionTime":"2026-01-22T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.015711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.015877 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.015904 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.015938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.015962 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.118842 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.118877 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.118889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.118906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.118918 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.120257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.120287 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.120296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.120307 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.120321 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.146290 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:12Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.151914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.152017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.152032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.152051 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.152063 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.167672 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:12Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.172189 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.172228 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.172237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.172254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.172263 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.184091 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:12Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.187142 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.187195 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.187205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.187217 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.187227 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.197609 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:12Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.201510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.201560 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.201571 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.201587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.201617 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.214376 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:12Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.214530 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.220914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.220958 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.220969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.220986 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.220997 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.323333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.323644 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.323830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.324068 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.324255 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.427594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.427642 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.427654 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.427672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.427684 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.530365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.530400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.530408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.530422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.530431 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.633871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.633917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.633931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.633951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.633964 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.736824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.736867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.736878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.736893 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.736906 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
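Every "Node became not ready" condition in this stretch carries the same reason: the network plugin reports no CNI configuration file in /etc/kubernetes/cni/net.d/, so the kubelet keeps re-recording Ready=False roughly every 100 ms. The following is a small sketch of the underlying check, assuming the conventional CNI file suffixes; the directory path is copied verbatim from the condition message.

# Sketch: look for CNI network configs in the directory the kubelet reports as empty.
# The path comes from the NodeNotReady condition above; the suffix list is the usual
# CNI convention and is an assumption of this sketch, not something the log states.
import pathlib

CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")
SUFFIXES = {".conf", ".conflist", ".json"}

configs = sorted(p for p in CNI_DIR.iterdir() if p.suffix in SUFFIXES) if CNI_DIR.is_dir() else []

if configs:
    for path in configs:
        print("found CNI config:", path)
else:
    print(f"no CNI configuration in {CNI_DIR}; the node stays NotReady until the network plugin writes one")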
Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.808006 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.808090 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.808121 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.808157 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.808294 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.808362 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.808404 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:12 crc kubenswrapper[4758]: E0122 16:31:12.808511 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.839506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.839547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.839559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.839575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.839594 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.877126 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:34:31.464063513 +0000 UTC Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.941455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.941500 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.941512 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.941526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:12 crc kubenswrapper[4758]: I0122 16:31:12.941538 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:12Z","lastTransitionTime":"2026-01-22T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.044273 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.044310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.044356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.044373 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.044382 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.147414 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.147474 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.147496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.147522 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.147545 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.249147 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.249173 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.249182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.249198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.249209 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.351765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.352101 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.352379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.352464 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.352594 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.454380 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.454427 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.454436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.454450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.454459 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.557164 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.557220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.557238 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.557260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.557275 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.660109 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.660147 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.660158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.660177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.660188 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.763254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.763288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.763301 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.763318 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.763329 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.865421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.865459 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.865468 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.865502 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.865511 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.878284 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:19:14.809662474 +0000 UTC Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.968901 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.969028 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.969044 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.969080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:13 crc kubenswrapper[4758]: I0122 16:31:13.969093 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:13Z","lastTransitionTime":"2026-01-22T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.072113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.072164 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.072174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.072190 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.072377 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.175251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.175323 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.175334 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.175358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.175370 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.278079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.278127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.278139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.278155 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.278167 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.380052 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.380119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.380134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.380151 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.380164 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.482835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.482882 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.482893 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.482911 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.482922 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.586052 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.586128 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.586145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.586168 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.586186 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.688290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.688335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.688380 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.688400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.688412 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.790955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.791003 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.791014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.791039 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.791050 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.807326 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.807396 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:14 crc kubenswrapper[4758]: E0122 16:31:14.807461 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.807523 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:14 crc kubenswrapper[4758]: E0122 16:31:14.807668 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:14 crc kubenswrapper[4758]: E0122 16:31:14.807849 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.807880 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:14 crc kubenswrapper[4758]: E0122 16:31:14.807942 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.879205 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:29:57.946269276 +0000 UTC Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.893477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.893582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.893613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.893630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.893640 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.996866 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.996920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.996932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.996949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:14 crc kubenswrapper[4758]: I0122 16:31:14.996959 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:14Z","lastTransitionTime":"2026-01-22T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
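The same four pods (network-check-target, network-metrics-daemon, networking-console-plugin and network-check-source) fail sandbox creation on every sync for the same reason, so the "Error syncing pod, skipping" entries repeat for as long as the CNI configuration is missing. When a capture like this grows long, a short filter makes the affected set easier to read; the sketch below assumes the excerpt has been saved to a plain-text file named kubelet.log (a hypothetical name).

# Sketch: collapse repeated "Error syncing pod" entries into one count per pod/UID.
# "kubelet.log" is a hypothetical file holding this journal excerpt; the regex matches
# the pod="..." podUID="..." fields exactly as they appear in the entries above.
import collections
import re

PATTERN = re.compile(r'Error syncing pod.*?pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"')

counts = collections.Counter()
with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        for match in PATTERN.finditer(line):
            counts[(match.group("pod"), match.group("uid"))] += 1

for (pod, uid), n in counts.most_common():
    print(f"{n:4d}x  {pod}  ({uid})")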
Has your network provider started?"} Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.102506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.102550 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.102561 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.102576 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.102586 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:15Z","lastTransitionTime":"2026-01-22T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.204570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.204902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.204996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.205086 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.205161 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:15Z","lastTransitionTime":"2026-01-22T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.307392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.307434 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.307445 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.307459 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.307467 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:15Z","lastTransitionTime":"2026-01-22T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.409945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.409981 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.409991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.410006 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.410017 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:15Z","lastTransitionTime":"2026-01-22T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.513664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.513698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.513708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.513723 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.513733 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:15Z","lastTransitionTime":"2026-01-22T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.616510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.616906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.617073 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.617260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.617480 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:15Z","lastTransitionTime":"2026-01-22T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.720239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.720321 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.720343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.720378 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.720402 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:15Z","lastTransitionTime":"2026-01-22T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.823375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.823448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.823457 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.823479 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.823499 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:15Z","lastTransitionTime":"2026-01-22T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.880013 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:20:59.84207852 +0000 UTC Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.926240 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.926291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.926310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.926332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:15 crc kubenswrapper[4758]: I0122 16:31:15.926349 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:15Z","lastTransitionTime":"2026-01-22T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.029179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.029459 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.029539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.029631 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.029706 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.132334 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.132654 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.132727 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.132825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.132936 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.236212 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.236276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.236294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.236322 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.236342 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.339437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.339500 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.339516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.339540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.339553 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.441448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.441488 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.441498 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.441516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.441526 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.546140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.546208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.546217 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.546233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.546244 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.648153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.648233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.648266 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.648295 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.648316 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.750814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.750851 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.750861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.750879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.750899 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.807159 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.807213 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.807176 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.807162 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:16 crc kubenswrapper[4758]: E0122 16:31:16.807324 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:16 crc kubenswrapper[4758]: E0122 16:31:16.807422 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:16 crc kubenswrapper[4758]: E0122 16:31:16.807499 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:16 crc kubenswrapper[4758]: E0122 16:31:16.807913 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.808156 4758 scope.go:117] "RemoveContainer" containerID="7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263" Jan 22 16:31:16 crc kubenswrapper[4758]: E0122 16:31:16.808420 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.852837 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.852879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.852887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.852902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.852914 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.880384 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:44:16.504968679 +0000 UTC Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.954922 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.954969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.954984 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.955001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:16 crc kubenswrapper[4758]: I0122 16:31:16.955015 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:16Z","lastTransitionTime":"2026-01-22T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.057938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.057988 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.057999 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.058017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.058027 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.161086 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.161148 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.161166 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.161189 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.161207 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.263666 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.263708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.263720 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.263737 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.263765 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.366719 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.366796 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.366817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.366839 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.366855 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.469540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.469592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.469605 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.469623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.469637 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.571865 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.571917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.571928 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.571944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.571955 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.673799 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.673871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.673880 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.673893 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.673902 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.781940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.781985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.781996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.782024 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.782038 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.880764 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:46:51.731683649 +0000 UTC Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.884306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.884345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.884359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.884375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.884386 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.987996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.988039 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.988049 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.988064 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:17 crc kubenswrapper[4758]: I0122 16:31:17.988074 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:17Z","lastTransitionTime":"2026-01-22T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.090234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.090289 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.090302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.090319 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.090330 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:18Z","lastTransitionTime":"2026-01-22T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.192385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.192486 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.192503 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.192525 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.192537 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:18Z","lastTransitionTime":"2026-01-22T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.294582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.294629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.294638 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.294651 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.294666 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:18Z","lastTransitionTime":"2026-01-22T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.397011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.397043 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.397056 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.397074 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.397086 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:18Z","lastTransitionTime":"2026-01-22T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.499695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.499756 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.499765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.499819 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.499829 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:18Z","lastTransitionTime":"2026-01-22T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.602594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.602644 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.602656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.602675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.602686 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:18Z","lastTransitionTime":"2026-01-22T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.704488 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.704528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.704540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.704560 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.704571 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:18Z","lastTransitionTime":"2026-01-22T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.806705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.806783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.806794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.806809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.806820 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:18Z","lastTransitionTime":"2026-01-22T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.807162 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:18 crc kubenswrapper[4758]: E0122 16:31:18.807264 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.807286 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.807349 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:18 crc kubenswrapper[4758]: E0122 16:31:18.807452 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:18 crc kubenswrapper[4758]: E0122 16:31:18.807505 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.807638 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:18 crc kubenswrapper[4758]: E0122 16:31:18.807685 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.819329 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b21f81e8-3f11-43f9-abdb-09e8d25aeb73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0004ca3184c4311fd606fb18d3c4657d88f6212a1ac49a882c1a8ec5162c314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e25bfe191c79389160e8c25e97ebd3bf2782cccecf01aac06c459041e083a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5lx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cbszh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.831637 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf18bca-54c9-46a7-ae1a-0e4cd3f2ff2f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdcb3871deb3a437bfd84b017af8233d06a10cbc0da01bb1aca18a10b40ca3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f65d6332d7a785ece6b513b6dc9c2b705475831c3d926b61070af12139bd51bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.845496 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afc42466-9bb2-4e33-abde-6a09e897045b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11980645d08b6999a3017461b48c990c4654c8def5711702ff41c9ccc4eec17e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://557099dd67191b0cc21d555b7d1d92f631020c0cb659d1f0d799701da7035b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4c5c3f4f3b6c4096685c6a1a94c461dd90d532e6c007637fe1090addd5e4ce8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.860079 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.873653 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10fc91a9777392383ea1a48bb940f13581052f2aaadce7c2d94588884a8ff832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.881287 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:00:04.74152173 +0000 UTC Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.893487 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a265cc950ba85a41da92efbf8a471efa10bdc6e
f7aa7837fc86c3e4e023a263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:31:04Z\\\",\\\"message\\\":\\\".417396 6826 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 16:31:04.417817 6826 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 16:31:04.418084 6826 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 16:31:04.418126 6826 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 16:31:04.418809 6826 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:31:04.418839 6826 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 16:31:04.418855 6826 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:31:04.418896 6826 factory.go:656] Stopping watch factory\\\\nI0122 16:31:04.418923 6826 ovnkube.go:599] Stopped ovnkube\\\\nI0122 16:31:04.418925 6826 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:31:04.418949 6826 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:31:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:31:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96qwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jdpck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.906618 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lt6tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"090f3014-3d99-49d5-8a9d-9719b4efbcf8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a09e0ee71eddb461f883d44293ed63887153350f0f617799e7f360b5d6fdd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhkzn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lt6tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.909235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.909349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.909443 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.909532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.909625 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:18Z","lastTransitionTime":"2026-01-22T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.919459 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d9485b50dd3fa712a0f43f04b4d3ae98e0f152d17b5db4b6f214125c1e926a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.932807 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.942847 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-g8wjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"425c9f0a-b14e-48d3-bd86-6fc510f22a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1d22788bf54b1c4a55b0c19222ad6dde207887ab282b97324717333f0280f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtrsf\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-g8wjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.955030 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b5f24a-19df-4969-b547-a5acc323c58a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208979f8d30765fcfd45650c760741d72bd7119bfe62ebf4d7c1554d6c6d56e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzkms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zsbtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.972336 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9182510-5fc6-4717-b94c-de8ca4fb7c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1b80316bb1f3b27668a5ff6e547c13c4f84ae30f40fc6d0407849fb59fb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f3c265d367e049f27982f95524ebb792d470ac5b7a7b5fd3946513e03c8098\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573cb23026f25b32eeed63ad42fc40c8d12bbefb8a5d8bbeb002825206e5063\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19e2c9bd36ae362c851d4ebed8e9c3f883858c66e73ba525ef64ace0d35e1c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce154ea9f4c38eb3e8fb953efe771bb3d2d51bccc95ae6eda6b35a4e12cdc28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a46ded9d39ed5f3daa0bec5963896d37a97613dd4bcb238bf8d06d0a192d6263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c506252f8259e793314a9f357401a7f80740b83066071b48e4665416c9994d43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mxd2\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fqfn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:18 crc kubenswrapper[4758]: I0122 16:31:18.985526 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xqns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8br2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-2xqns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.006239 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9309c6-0336-4a15-8cbf-78178b4e57d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6824555f2019c5b0c92137ccb0a9af419b01ce0c63e1739c1d22b155a97c98a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a945d54b82518c2cda9257528f766444b687693255c50680adafb11651c792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca6e50d3a2acc2a4d43dc4a1fc1ff783ea5cb78978132377b7bb12b0dbd3e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c7268055ac9d7def228857bd8b974a53bb71fa873e1e0495d4691b8ca11902\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb71578e3eba87e91e6f6db0b03669e556cfbf38e2df367d20b6c8c79952f59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2ac18cef270d20566735c61087fc3dca4531c2118fa5bf3da8a2a2432ddfe8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa3f07fbdff09e87a47e2a0d70ac0cf314b1e249502ef1c9c88eb628cb3ea6b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670bb810ccabc5a3436ba38f17b293b3203a00e49b2b97320b0de8b4bebe03b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.012634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.012677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.012689 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.012707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.012722 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.022393 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f128c8ae-2e32-4884-a296-728579141589\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:29:51.087222 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:29:51.088631 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2674264491/tls.crt::/tmp/serving-cert-2674264491/tls.key\\\\\\\"\\\\nI0122 16:29:56.617863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:29:56.621506 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:29:56.621541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:29:56.621606 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:29:56.621634 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:29:56.631508 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:29:56.631550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631559 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:29:56.631568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:29:56.631576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0122 16:29:56.631574 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 16:29:56.631584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:29:56.631610 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 16:29:56.634157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.037627 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ba0bf6-e521-4b47-a7e5-81f19a4bf3ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f742b25c51806335d17c6c67e8ad4944228fde89626352044f62ee1e708c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0197852c20ea1961ea8cff956886a8a42967c95fad73d2ed8bd37e6f763cca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3cdc36e13e13f43cb329beb4b415f17dab3d8427338168449ea3771053d668a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://981ef0ee873407291236dfd734567e3213a9451d495eb97e1029696cc788acbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:29:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.053084 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dfeba9911630f8c172fab9eee3a107fbc2e24407b0af1f69cd539bac18d47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:19Z is after 2025-08-24T17:21:41Z" Jan 22 
16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.067124 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7dvfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97853b38-352d-42df-ad31-639c0e58093a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:30:49Z\\\",\\\"message\\\":\\\"2026-01-22T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f\\\\n2026-01-22T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3ab300e-f214-48f5-80e5-57280a3cce0f to /host/opt/cni/bin/\\\\n2026-01-22T16:30:04Z [verbose] multus-daemon started\\\\n2026-01-22T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcrsz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7dvfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.079276 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.115047 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.115083 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.115095 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.115110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.115122 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.217144 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.217186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.217196 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.217212 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.217223 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.319487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.319533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.319548 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.319572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.319589 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.422775 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.422835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.422852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.422874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.422890 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.525821 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.525885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.525901 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.525925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.525942 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.629870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.629935 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.629980 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.630007 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.630025 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.732820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.732913 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.732928 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.732946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.732958 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.836265 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.836312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.836324 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.836341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.836354 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.883163 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:08:48.333608496 +0000 UTC Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.939603 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.939945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.940091 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.940188 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:19 crc kubenswrapper[4758]: I0122 16:31:19.940270 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:19Z","lastTransitionTime":"2026-01-22T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.043386 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.043438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.043449 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.043466 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.043479 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:20Z","lastTransitionTime":"2026-01-22T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.145889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.145955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.145971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.145992 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.146010 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:20Z","lastTransitionTime":"2026-01-22T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.248931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.248975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.248987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.249001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.249011 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:20Z","lastTransitionTime":"2026-01-22T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.351974 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.352402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.352616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.352896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.353102 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:20Z","lastTransitionTime":"2026-01-22T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.456332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.456411 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.456424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.456471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.456484 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:20Z","lastTransitionTime":"2026-01-22T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.559527 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.559596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.559620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.559650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.559671 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:20Z","lastTransitionTime":"2026-01-22T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.664201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.664240 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.664249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.664263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.664274 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:20Z","lastTransitionTime":"2026-01-22T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.751107 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:20 crc kubenswrapper[4758]: E0122 16:31:20.751314 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:31:20 crc kubenswrapper[4758]: E0122 16:31:20.751395 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs podName:3ef1c490-d5f9-458d-8b3e-8580a5f07df6 nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.751376393 +0000 UTC m=+166.234715678 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs") pod "network-metrics-daemon-2xqns" (UID: "3ef1c490-d5f9-458d-8b3e-8580a5f07df6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.794590 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.794639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.794656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.794676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.794690 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:20Z","lastTransitionTime":"2026-01-22T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.807325 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:20 crc kubenswrapper[4758]: E0122 16:31:20.807525 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.807537 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.807586 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.807804 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:20 crc kubenswrapper[4758]: E0122 16:31:20.808006 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:20 crc kubenswrapper[4758]: E0122 16:31:20.808181 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:20 crc kubenswrapper[4758]: E0122 16:31:20.808282 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.884234 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:04:43.34107048 +0000 UTC Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.897372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.897406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.897417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.897433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:20 crc kubenswrapper[4758]: I0122 16:31:20.897446 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:20Z","lastTransitionTime":"2026-01-22T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:20.999976 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.000053 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.000076 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.000106 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.000124 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.102896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.103272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.103302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.103327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.103347 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.208357 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.208395 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.208406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.208421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.208433 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.310658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.310700 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.310713 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.310730 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.310762 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.412838 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.412865 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.412873 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.412885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.412895 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.515427 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.515483 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.515493 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.515506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.515515 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.618187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.618225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.618235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.618250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.618260 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.722093 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.722146 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.722159 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.722177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.722191 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.824062 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.824110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.824123 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.824141 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.824153 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.884413 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:09:30.351378867 +0000 UTC Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.926477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.926687 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.926776 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.926866 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:21 crc kubenswrapper[4758]: I0122 16:31:21.926925 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:21Z","lastTransitionTime":"2026-01-22T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.029946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.030091 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.030118 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.030143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.030172 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.133504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.133572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.133592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.133619 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.133638 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.236508 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.236584 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.236603 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.236632 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.236649 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.338506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.338572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.338585 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.338603 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.338616 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.441794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.441867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.441888 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.441912 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.441933 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.544975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.545037 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.545055 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.545080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.545097 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.604161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.604213 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.604228 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.604249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.604267 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.619197 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.622464 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.622601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.622665 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.622728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.622840 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.634289 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.637452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.637512 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.637522 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.637540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.637549 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.650838 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.658508 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.658577 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.658590 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.658607 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.658618 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.670858 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.674359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.674400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.674411 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.674427 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.674439 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.688768 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7288053-8dca-462f-b24f-6a9d8be738b3\\\",\\\"systemUUID\\\":\\\"83805c52-2bba-4705-bdbe-9101a9d1190e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:31:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.688903 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.690386 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.690416 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.690426 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.690441 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.690450 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.793313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.793354 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.793362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.793377 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.793387 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.807671 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.807698 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.807829 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.807891 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.808037 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.808067 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.808169 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:22 crc kubenswrapper[4758]: E0122 16:31:22.808269 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.884792 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:25:38.53411007 +0000 UTC Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.895871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.895903 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.895912 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.895925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.895935 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.999543 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.999584 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.999594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.999608 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:22 crc kubenswrapper[4758]: I0122 16:31:22.999617 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:22Z","lastTransitionTime":"2026-01-22T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.102092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.102417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.102531 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.102617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.102677 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:23Z","lastTransitionTime":"2026-01-22T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.205646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.205967 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.206134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.206317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.206508 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:23Z","lastTransitionTime":"2026-01-22T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.309382 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.309637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.309706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.309837 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.309925 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:23Z","lastTransitionTime":"2026-01-22T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.413240 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.413289 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.413308 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.413332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.413349 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:23Z","lastTransitionTime":"2026-01-22T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.516194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.516226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.516235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.516248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.516256 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:23Z","lastTransitionTime":"2026-01-22T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.618900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.618941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.618955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.618974 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.618988 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:23Z","lastTransitionTime":"2026-01-22T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.721092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.721138 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.721258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.721278 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.721290 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:23Z","lastTransitionTime":"2026-01-22T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.824239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.824288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.824300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.824316 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.824335 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:23Z","lastTransitionTime":"2026-01-22T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.885831 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:52:09.822285261 +0000 UTC Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.927131 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.927251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.927263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.927287 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:23 crc kubenswrapper[4758]: I0122 16:31:23.927302 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:23Z","lastTransitionTime":"2026-01-22T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.029701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.029786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.029804 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.029827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.029845 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.132243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.132275 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.132301 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.132314 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.132323 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.235148 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.235219 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.235236 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.235257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.235273 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.337688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.337790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.337809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.337830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.337847 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.440766 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.440829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.440842 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.440861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.440878 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.543857 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.543962 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.543986 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.544016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.544045 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.647273 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.647339 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.647362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.647392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.647415 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.750877 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.750924 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.750943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.750965 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.750982 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.808056 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.808094 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.808127 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:24 crc kubenswrapper[4758]: E0122 16:31:24.808860 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:24 crc kubenswrapper[4758]: E0122 16:31:24.808806 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.808163 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:24 crc kubenswrapper[4758]: E0122 16:31:24.808959 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:24 crc kubenswrapper[4758]: E0122 16:31:24.809122 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.853933 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.853983 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.854001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.854022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.854037 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.886681 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:00:01.420246237 +0000 UTC Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.957272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.957322 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.957339 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.957376 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:24 crc kubenswrapper[4758]: I0122 16:31:24.957411 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:24Z","lastTransitionTime":"2026-01-22T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.059842 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.059895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.059908 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.059927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.060353 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.163658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.163968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.164125 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.164280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.164395 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.267232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.267300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.267320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.267347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.267371 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.370611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.370674 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.370691 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.370715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.370735 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.473926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.473998 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.474024 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.474056 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.474075 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.576375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.576612 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.576675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.576760 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.576829 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.678684 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.678731 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.678750 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.678763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.678773 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.781637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.781941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.782032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.782126 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.782217 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.885684 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.885750 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.885759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.885777 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.885787 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.887820 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:27:00.859090108 +0000 UTC Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.987611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.987670 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.987687 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.987711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:25 crc kubenswrapper[4758]: I0122 16:31:25.987728 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:25Z","lastTransitionTime":"2026-01-22T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.091320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.091388 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.091413 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.091445 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.091469 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:26Z","lastTransitionTime":"2026-01-22T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.194615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.194643 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.194651 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.194664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.194673 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:26Z","lastTransitionTime":"2026-01-22T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.297198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.297265 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.297284 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.297304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.297321 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:26Z","lastTransitionTime":"2026-01-22T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.400517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.400588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.400609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.400638 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.400660 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:26Z","lastTransitionTime":"2026-01-22T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.502942 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.502995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.503015 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.503038 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.503055 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:26Z","lastTransitionTime":"2026-01-22T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.605817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.605889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.605908 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.605931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.605953 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:26Z","lastTransitionTime":"2026-01-22T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.708555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.708606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.708620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.708639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.708650 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:26Z","lastTransitionTime":"2026-01-22T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.807559 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.807621 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.807768 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:26 crc kubenswrapper[4758]: E0122 16:31:26.807775 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:26 crc kubenswrapper[4758]: E0122 16:31:26.807839 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.807869 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:26 crc kubenswrapper[4758]: E0122 16:31:26.807953 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:26 crc kubenswrapper[4758]: E0122 16:31:26.808067 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.810834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.810873 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.810889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.810910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.810924 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:26Z","lastTransitionTime":"2026-01-22T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.888974 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:25:45.959302666 +0000 UTC Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.913043 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.913114 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.913138 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.913189 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:26 crc kubenswrapper[4758]: I0122 16:31:26.913212 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:26Z","lastTransitionTime":"2026-01-22T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.016684 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.016726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.016760 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.016778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.016789 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.120100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.120168 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.120191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.120218 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.120240 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.222607 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.222688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.222711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.222779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.222810 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.326042 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.326084 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.326097 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.326113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.326122 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.428711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.428779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.428792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.428808 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.428822 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.531528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.531587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.531597 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.531627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.531638 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.634190 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.634238 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.634249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.634269 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.634281 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.737467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.737516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.737529 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.737552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.737564 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.839989 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.840044 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.840059 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.840102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.840150 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.889733 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 05:51:36.045329543 +0000 UTC Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.943563 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.943868 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.943981 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.944089 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:27 crc kubenswrapper[4758]: I0122 16:31:27.944213 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:27Z","lastTransitionTime":"2026-01-22T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.047160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.047208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.047224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.047244 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.047258 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.150066 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.150536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.150669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.150779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.150865 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.253373 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.253412 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.253421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.253440 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.253454 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.356700 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.356769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.356784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.356804 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.356815 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.459034 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.459079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.459088 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.459105 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.459116 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.562331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.562377 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.562389 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.562407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.562420 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.665847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.665905 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.665916 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.665934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.665951 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.769069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.769133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.769143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.769162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.769174 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.807577 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.807671 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.807713 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.807840 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:28 crc kubenswrapper[4758]: E0122 16:31:28.808091 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:28 crc kubenswrapper[4758]: E0122 16:31:28.808160 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:28 crc kubenswrapper[4758]: E0122 16:31:28.807996 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:28 crc kubenswrapper[4758]: E0122 16:31:28.808312 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.858639 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.858623609 podStartE2EDuration="37.858623609s" podCreationTimestamp="2026-01-22 16:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:28.839971418 +0000 UTC m=+110.323310703" watchObservedRunningTime="2026-01-22 16:31:28.858623609 +0000 UTC m=+110.341962884" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.871602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.871943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.872095 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.872248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.872530 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.877164 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.877145516 podStartE2EDuration="1m32.877145516s" podCreationTimestamp="2026-01-22 16:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:28.863030019 +0000 UTC m=+110.346369314" watchObservedRunningTime="2026-01-22 16:31:28.877145516 +0000 UTC m=+110.360484831" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.890381 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:23:24.235590572 +0000 UTC Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.956562 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cbszh" podStartSLOduration=86.956544422 podStartE2EDuration="1m26.956544422s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:28.941995662 +0000 UTC m=+110.425334947" watchObservedRunningTime="2026-01-22 16:31:28.956544422 +0000 UTC m=+110.439883717" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.975408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.975466 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.975477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.975492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.975502 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:28Z","lastTransitionTime":"2026-01-22T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:28 crc kubenswrapper[4758]: I0122 16:31:28.981795 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-g8wjx" podStartSLOduration=86.981773783 podStartE2EDuration="1m26.981773783s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:28.981446134 +0000 UTC m=+110.464785419" watchObservedRunningTime="2026-01-22 16:31:28.981773783 +0000 UTC m=+110.465113098" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.011969 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podStartSLOduration=87.011952219 podStartE2EDuration="1m27.011952219s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:28.995080377 +0000 UTC m=+110.478419662" watchObservedRunningTime="2026-01-22 16:31:29.011952219 +0000 UTC m=+110.495291504" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.012056 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-fqfn9" podStartSLOduration=87.012053542 podStartE2EDuration="1m27.012053542s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:29.01160654 +0000 UTC m=+110.494945845" watchObservedRunningTime="2026-01-22 16:31:29.012053542 +0000 UTC m=+110.495392827" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.024600 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lt6tl" podStartSLOduration=87.024580845 podStartE2EDuration="1m27.024580845s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:29.024532724 +0000 UTC m=+110.507872009" watchObservedRunningTime="2026-01-22 16:31:29.024580845 +0000 UTC m=+110.507920130" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.048665 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.048647865 podStartE2EDuration="1m28.048647865s" podCreationTimestamp="2026-01-22 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:29.048322846 +0000 UTC m=+110.531662151" watchObservedRunningTime="2026-01-22 16:31:29.048647865 +0000 UTC m=+110.531987150" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.064384 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.064364046 podStartE2EDuration="1m33.064364046s" podCreationTimestamp="2026-01-22 16:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:29.063687397 +0000 UTC m=+110.547026682" watchObservedRunningTime="2026-01-22 16:31:29.064364046 +0000 UTC m=+110.547703331" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.075388 
4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=64.075371427 podStartE2EDuration="1m4.075371427s" podCreationTimestamp="2026-01-22 16:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:29.074844552 +0000 UTC m=+110.558183857" watchObservedRunningTime="2026-01-22 16:31:29.075371427 +0000 UTC m=+110.558710712" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.078498 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.078533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.078542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.078556 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.078566 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:29Z","lastTransitionTime":"2026-01-22T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.102461 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7dvfg" podStartSLOduration=87.102442289 podStartE2EDuration="1m27.102442289s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:29.102228403 +0000 UTC m=+110.585567698" watchObservedRunningTime="2026-01-22 16:31:29.102442289 +0000 UTC m=+110.585781574" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.180602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.180658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.180675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.180699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.180715 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:29Z","lastTransitionTime":"2026-01-22T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.282848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.282884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.282892 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.282907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.282918 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:29Z","lastTransitionTime":"2026-01-22T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.384599 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.384636 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.384645 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.384657 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.384667 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:29Z","lastTransitionTime":"2026-01-22T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.487676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.487708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.487717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.487732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.487814 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:29Z","lastTransitionTime":"2026-01-22T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.591306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.591362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.591371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.591394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.591406 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:29Z","lastTransitionTime":"2026-01-22T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.693831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.693883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.693895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.693911 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.693924 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:29Z","lastTransitionTime":"2026-01-22T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.796945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.796991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.797001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.797016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.797024 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:29Z","lastTransitionTime":"2026-01-22T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.808529 4758 scope.go:117] "RemoveContainer" containerID="7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263" Jan 22 16:31:29 crc kubenswrapper[4758]: E0122 16:31:29.808790 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.891159 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:11:26.497980792 +0000 UTC Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.899150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.899210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.899227 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.899251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:29 crc kubenswrapper[4758]: I0122 16:31:29.899268 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:29Z","lastTransitionTime":"2026-01-22T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.001637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.001674 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.001683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.001696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.001704 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.104617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.104700 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.104720 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.105162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.105223 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.208619 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.208667 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.208683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.208705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.208722 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.312326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.312419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.312446 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.312479 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.312502 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.415534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.416071 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.416290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.416455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.416609 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.519566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.519625 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.519642 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.519666 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.519687 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.622597 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.622631 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.622643 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.622660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.622673 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.725103 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.725143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.725152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.725167 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.725180 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.807518 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.807544 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.807905 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:30 crc kubenswrapper[4758]: E0122 16:31:30.808006 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.807935 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:30 crc kubenswrapper[4758]: E0122 16:31:30.808151 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:30 crc kubenswrapper[4758]: E0122 16:31:30.808380 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:30 crc kubenswrapper[4758]: E0122 16:31:30.808413 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.827159 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.827223 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.827233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.827249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.827260 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.891607 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:14:41.719481085 +0000 UTC Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.930316 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.930362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.930404 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.930421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:30 crc kubenswrapper[4758]: I0122 16:31:30.930432 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:30Z","lastTransitionTime":"2026-01-22T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.032700 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.032792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.032810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.032832 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.032848 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.135431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.135470 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.135480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.135493 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.135501 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.238143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.238206 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.238217 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.238248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.238269 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.341445 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.341497 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.341509 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.341528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.341543 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.443923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.443950 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.443958 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.443970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.443979 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.547074 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.547118 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.547130 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.547145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.547181 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.649848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.649885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.649896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.649912 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.649924 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.752784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.752832 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.752848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.752871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.752889 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.855593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.855671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.855683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.855703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.855717 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.892228 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:09:12.407678506 +0000 UTC Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.959173 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.959231 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.959243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.959260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:31 crc kubenswrapper[4758]: I0122 16:31:31.959274 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:31Z","lastTransitionTime":"2026-01-22T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.061853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.061900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.061914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.061938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.061953 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.164637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.164686 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.164698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.164719 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.164734 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.266659 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.266695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.266706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.266721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.266733 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.369417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.369447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.369457 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.369472 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.369482 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.472160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.472200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.472213 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.472229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.472238 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.575135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.575179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.575188 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.575202 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.575211 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.677273 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.677345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.677361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.677401 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.677420 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.780379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.780424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.780436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.780454 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.780468 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.807337 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.807376 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.807337 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:32 crc kubenswrapper[4758]: E0122 16:31:32.807472 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.807493 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:32 crc kubenswrapper[4758]: E0122 16:31:32.807574 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:32 crc kubenswrapper[4758]: E0122 16:31:32.807822 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:32 crc kubenswrapper[4758]: E0122 16:31:32.807896 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.883236 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.883283 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.883294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.883309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.883321 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.892808 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:16:55.478981197 +0000 UTC Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.986146 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.986192 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.986211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.986235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:32 crc kubenswrapper[4758]: I0122 16:31:32.986252 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:32Z","lastTransitionTime":"2026-01-22T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.008656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.008735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.008804 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.008835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.008856 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:31:33Z","lastTransitionTime":"2026-01-22T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.071114 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699"] Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.071643 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.074348 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.074866 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.076647 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.077249 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.185908 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0b6f2bc5-a395-41da-808f-4e84a941adee-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.185983 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b6f2bc5-a395-41da-808f-4e84a941adee-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.186021 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/0b6f2bc5-a395-41da-808f-4e84a941adee-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.186044 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b6f2bc5-a395-41da-808f-4e84a941adee-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.186074 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b6f2bc5-a395-41da-808f-4e84a941adee-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.286967 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0b6f2bc5-a395-41da-808f-4e84a941adee-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.287306 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b6f2bc5-a395-41da-808f-4e84a941adee-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.287478 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0b6f2bc5-a395-41da-808f-4e84a941adee-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.287605 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b6f2bc5-a395-41da-808f-4e84a941adee-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.287712 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b6f2bc5-a395-41da-808f-4e84a941adee-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.287098 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/0b6f2bc5-a395-41da-808f-4e84a941adee-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.288044 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0b6f2bc5-a395-41da-808f-4e84a941adee-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.289023 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b6f2bc5-a395-41da-808f-4e84a941adee-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.294889 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b6f2bc5-a395-41da-808f-4e84a941adee-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.310846 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b6f2bc5-a395-41da-808f-4e84a941adee-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pp699\" (UID: \"0b6f2bc5-a395-41da-808f-4e84a941adee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.396007 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" Jan 22 16:31:33 crc kubenswrapper[4758]: W0122 16:31:33.411205 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b6f2bc5_a395_41da_808f_4e84a941adee.slice/crio-5e266d609dbd7e606a1d10186f98ebd60cf75aefbf122436b2ceb758cc52ba2d WatchSource:0}: Error finding container 5e266d609dbd7e606a1d10186f98ebd60cf75aefbf122436b2ceb758cc52ba2d: Status 404 returned error can't find the container with id 5e266d609dbd7e606a1d10186f98ebd60cf75aefbf122436b2ceb758cc52ba2d Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.892901 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:25:55.154216052 +0000 UTC Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.892954 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 22 16:31:33 crc kubenswrapper[4758]: I0122 16:31:33.900923 4758 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 16:31:34 crc kubenswrapper[4758]: I0122 16:31:34.377043 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" event={"ID":"0b6f2bc5-a395-41da-808f-4e84a941adee","Type":"ContainerStarted","Data":"20f10a6efbcf017ee3412618850e68beb3ea52022ac4344c0e8b2d904bf5a7bb"} Jan 22 16:31:34 crc kubenswrapper[4758]: I0122 16:31:34.377362 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" event={"ID":"0b6f2bc5-a395-41da-808f-4e84a941adee","Type":"ContainerStarted","Data":"5e266d609dbd7e606a1d10186f98ebd60cf75aefbf122436b2ceb758cc52ba2d"} Jan 22 16:31:34 crc kubenswrapper[4758]: I0122 16:31:34.390147 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pp699" podStartSLOduration=92.390126153 podStartE2EDuration="1m32.390126153s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:34.389772283 +0000 UTC m=+115.873111588" watchObservedRunningTime="2026-01-22 16:31:34.390126153 +0000 UTC m=+115.873465438" Jan 22 16:31:34 crc kubenswrapper[4758]: I0122 16:31:34.807250 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:34 crc kubenswrapper[4758]: I0122 16:31:34.807289 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:34 crc kubenswrapper[4758]: I0122 16:31:34.807292 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:34 crc kubenswrapper[4758]: E0122 16:31:34.807746 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:34 crc kubenswrapper[4758]: E0122 16:31:34.807775 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:34 crc kubenswrapper[4758]: I0122 16:31:34.807325 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:34 crc kubenswrapper[4758]: E0122 16:31:34.808027 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:34 crc kubenswrapper[4758]: E0122 16:31:34.807837 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 16:31:36.385528 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/1.log" Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 16:31:36.386279 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/0.log" Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 16:31:36.386309 4758 generic.go:334] "Generic (PLEG): container finished" podID="97853b38-352d-42df-ad31-639c0e58093a" containerID="56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e" exitCode=1 Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 16:31:36.386341 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7dvfg" event={"ID":"97853b38-352d-42df-ad31-639c0e58093a","Type":"ContainerDied","Data":"56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e"} Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 16:31:36.386370 4758 scope.go:117] "RemoveContainer" containerID="12409cad6bedda3da41a11ce209dd58b7d15e3fc0dde575d70b3aa6c64435144" Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 16:31:36.386767 4758 scope.go:117] "RemoveContainer" containerID="56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e" Jan 22 16:31:36 crc kubenswrapper[4758]: E0122 16:31:36.386983 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-7dvfg_openshift-multus(97853b38-352d-42df-ad31-639c0e58093a)\"" pod="openshift-multus/multus-7dvfg" podUID="97853b38-352d-42df-ad31-639c0e58093a" Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 
16:31:36.807560 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 16:31:36.807572 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 16:31:36.807639 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:36 crc kubenswrapper[4758]: I0122 16:31:36.807700 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:36 crc kubenswrapper[4758]: E0122 16:31:36.807874 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:36 crc kubenswrapper[4758]: E0122 16:31:36.808040 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:36 crc kubenswrapper[4758]: E0122 16:31:36.808140 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:36 crc kubenswrapper[4758]: E0122 16:31:36.808193 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:37 crc kubenswrapper[4758]: I0122 16:31:37.391498 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/1.log" Jan 22 16:31:38 crc kubenswrapper[4758]: I0122 16:31:38.807550 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:38 crc kubenswrapper[4758]: I0122 16:31:38.807583 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:38 crc kubenswrapper[4758]: I0122 16:31:38.807562 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:38 crc kubenswrapper[4758]: I0122 16:31:38.807562 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:38 crc kubenswrapper[4758]: E0122 16:31:38.808657 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:38 crc kubenswrapper[4758]: E0122 16:31:38.808710 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:38 crc kubenswrapper[4758]: E0122 16:31:38.808907 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:38 crc kubenswrapper[4758]: E0122 16:31:38.808986 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:38 crc kubenswrapper[4758]: E0122 16:31:38.844360 4758 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 22 16:31:38 crc kubenswrapper[4758]: E0122 16:31:38.893766 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:31:40 crc kubenswrapper[4758]: I0122 16:31:40.807837 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:40 crc kubenswrapper[4758]: I0122 16:31:40.807861 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:40 crc kubenswrapper[4758]: I0122 16:31:40.807886 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:40 crc kubenswrapper[4758]: I0122 16:31:40.807903 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:40 crc kubenswrapper[4758]: E0122 16:31:40.807973 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:40 crc kubenswrapper[4758]: E0122 16:31:40.808083 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:40 crc kubenswrapper[4758]: E0122 16:31:40.808132 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:40 crc kubenswrapper[4758]: E0122 16:31:40.808196 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:42 crc kubenswrapper[4758]: I0122 16:31:42.807905 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:42 crc kubenswrapper[4758]: I0122 16:31:42.807973 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:42 crc kubenswrapper[4758]: I0122 16:31:42.808118 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:42 crc kubenswrapper[4758]: E0122 16:31:42.808349 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:42 crc kubenswrapper[4758]: I0122 16:31:42.808393 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:42 crc kubenswrapper[4758]: E0122 16:31:42.808522 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:42 crc kubenswrapper[4758]: E0122 16:31:42.808632 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:42 crc kubenswrapper[4758]: E0122 16:31:42.808796 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:43 crc kubenswrapper[4758]: I0122 16:31:43.807820 4758 scope.go:117] "RemoveContainer" containerID="7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263" Jan 22 16:31:43 crc kubenswrapper[4758]: E0122 16:31:43.807969 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jdpck_openshift-ovn-kubernetes(9b60a09e-8bfa-4d2e-998d-e1db5dec0faa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" Jan 22 16:31:43 crc kubenswrapper[4758]: E0122 16:31:43.895082 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:31:44 crc kubenswrapper[4758]: I0122 16:31:44.807303 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:44 crc kubenswrapper[4758]: I0122 16:31:44.807360 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:44 crc kubenswrapper[4758]: I0122 16:31:44.807378 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:44 crc kubenswrapper[4758]: I0122 16:31:44.807378 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:44 crc kubenswrapper[4758]: E0122 16:31:44.807492 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:44 crc kubenswrapper[4758]: E0122 16:31:44.807610 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:44 crc kubenswrapper[4758]: E0122 16:31:44.807755 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:44 crc kubenswrapper[4758]: E0122 16:31:44.807940 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:46 crc kubenswrapper[4758]: I0122 16:31:46.807248 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:46 crc kubenswrapper[4758]: I0122 16:31:46.807315 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:46 crc kubenswrapper[4758]: E0122 16:31:46.807430 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:46 crc kubenswrapper[4758]: I0122 16:31:46.807481 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:46 crc kubenswrapper[4758]: E0122 16:31:46.807602 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:46 crc kubenswrapper[4758]: E0122 16:31:46.807636 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:46 crc kubenswrapper[4758]: I0122 16:31:46.808632 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:46 crc kubenswrapper[4758]: E0122 16:31:46.808966 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:48 crc kubenswrapper[4758]: I0122 16:31:48.808070 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:48 crc kubenswrapper[4758]: I0122 16:31:48.809068 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:48 crc kubenswrapper[4758]: I0122 16:31:48.809068 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:48 crc kubenswrapper[4758]: I0122 16:31:48.809275 4758 scope.go:117] "RemoveContainer" containerID="56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e" Jan 22 16:31:48 crc kubenswrapper[4758]: E0122 16:31:48.809375 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:48 crc kubenswrapper[4758]: I0122 16:31:48.809472 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:48 crc kubenswrapper[4758]: E0122 16:31:48.809606 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:48 crc kubenswrapper[4758]: E0122 16:31:48.809730 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:48 crc kubenswrapper[4758]: E0122 16:31:48.809842 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:48 crc kubenswrapper[4758]: E0122 16:31:48.895448 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:31:49 crc kubenswrapper[4758]: I0122 16:31:49.432476 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/1.log" Jan 22 16:31:49 crc kubenswrapper[4758]: I0122 16:31:49.432541 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7dvfg" event={"ID":"97853b38-352d-42df-ad31-639c0e58093a","Type":"ContainerStarted","Data":"733ea95ed7d8d4ff71e143ac3734ecdaaaec088e3579e9563ae043bb871c0a3d"} Jan 22 16:31:50 crc kubenswrapper[4758]: I0122 16:31:50.807431 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:50 crc kubenswrapper[4758]: I0122 16:31:50.807439 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:50 crc kubenswrapper[4758]: I0122 16:31:50.807454 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:50 crc kubenswrapper[4758]: I0122 16:31:50.807545 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:50 crc kubenswrapper[4758]: E0122 16:31:50.807793 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:50 crc kubenswrapper[4758]: E0122 16:31:50.807884 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:50 crc kubenswrapper[4758]: E0122 16:31:50.807941 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:50 crc kubenswrapper[4758]: E0122 16:31:50.808125 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:52 crc kubenswrapper[4758]: I0122 16:31:52.807565 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:52 crc kubenswrapper[4758]: I0122 16:31:52.807638 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:52 crc kubenswrapper[4758]: I0122 16:31:52.807596 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:52 crc kubenswrapper[4758]: I0122 16:31:52.807589 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:52 crc kubenswrapper[4758]: E0122 16:31:52.807726 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:52 crc kubenswrapper[4758]: E0122 16:31:52.807821 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:52 crc kubenswrapper[4758]: E0122 16:31:52.807911 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:52 crc kubenswrapper[4758]: E0122 16:31:52.807982 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:53 crc kubenswrapper[4758]: E0122 16:31:53.897174 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:31:54 crc kubenswrapper[4758]: I0122 16:31:54.807515 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:54 crc kubenswrapper[4758]: I0122 16:31:54.807585 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:54 crc kubenswrapper[4758]: I0122 16:31:54.807515 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:54 crc kubenswrapper[4758]: E0122 16:31:54.807645 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:54 crc kubenswrapper[4758]: I0122 16:31:54.807536 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:54 crc kubenswrapper[4758]: E0122 16:31:54.807722 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:54 crc kubenswrapper[4758]: E0122 16:31:54.807985 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:54 crc kubenswrapper[4758]: E0122 16:31:54.808051 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:56 crc kubenswrapper[4758]: I0122 16:31:56.807598 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:56 crc kubenswrapper[4758]: E0122 16:31:56.807759 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:56 crc kubenswrapper[4758]: I0122 16:31:56.807848 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:56 crc kubenswrapper[4758]: I0122 16:31:56.807888 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:56 crc kubenswrapper[4758]: E0122 16:31:56.808118 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:56 crc kubenswrapper[4758]: I0122 16:31:56.808385 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:56 crc kubenswrapper[4758]: E0122 16:31:56.808538 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:56 crc kubenswrapper[4758]: E0122 16:31:56.808718 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:58 crc kubenswrapper[4758]: I0122 16:31:58.807423 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:58 crc kubenswrapper[4758]: I0122 16:31:58.807518 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:31:58 crc kubenswrapper[4758]: I0122 16:31:58.808974 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:31:58 crc kubenswrapper[4758]: I0122 16:31:58.808992 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:31:58 crc kubenswrapper[4758]: E0122 16:31:58.809155 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:31:58 crc kubenswrapper[4758]: I0122 16:31:58.809392 4758 scope.go:117] "RemoveContainer" containerID="7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263" Jan 22 16:31:58 crc kubenswrapper[4758]: E0122 16:31:58.809436 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:31:58 crc kubenswrapper[4758]: E0122 16:31:58.809597 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:31:58 crc kubenswrapper[4758]: E0122 16:31:58.810783 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:31:58 crc kubenswrapper[4758]: E0122 16:31:58.898028 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
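
The entries above keep repeating one readiness failure: kubelet.go:2916 reports NetworkReady=false because there is still no CNI configuration file in /etc/kubernetes/cni/net.d/, so every pod that needs pod networking is skipped with NetworkPluginNotReady. Below is a minimal, hypothetical Go sketch of that kind of check, written only to illustrate what the repeated message implies; it is not taken from the kubelet source, and the .conf/.conflist/.json extension list is an assumption borrowed from CNI's usual config-loading defaults.

// Illustration only: look for a CNI network configuration in the directory
// named by the kubelet error. If nothing is there, report the same
// "network plugin not ready" condition the log keeps showing.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}

	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Assumed extension set for CNI config files (.conf, .conflist, .json).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}

	if len(found) == 0 {
		// The state logged above: NetworkReady=false until the network
		// provider (OVN-Kubernetes / Multus here) writes its configuration.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		os.Exit(1)
	}
	fmt.Println("CNI configuration present:", found)
}

In this log the condition clears on its own: ovnkube-controller comes up at 16:31:59 (the ContainerStarted event that follows), and the pending pods start getting sandboxes around 16:32:05, when networking-console-plugin, network-check-target and network-check-source all report ContainerStarted.
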
Jan 22 16:31:59 crc kubenswrapper[4758]: I0122 16:31:59.467005 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/3.log" Jan 22 16:31:59 crc kubenswrapper[4758]: I0122 16:31:59.471201 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerStarted","Data":"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691"} Jan 22 16:31:59 crc kubenswrapper[4758]: I0122 16:31:59.471685 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:31:59 crc kubenswrapper[4758]: I0122 16:31:59.500849 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podStartSLOduration=117.500778286 podStartE2EDuration="1m57.500778286s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:59.499314395 +0000 UTC m=+140.982653690" watchObservedRunningTime="2026-01-22 16:31:59.500778286 +0000 UTC m=+140.984117581" Jan 22 16:32:00 crc kubenswrapper[4758]: I0122 16:32:00.011375 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2xqns"] Jan 22 16:32:00 crc kubenswrapper[4758]: I0122 16:32:00.011589 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:32:00 crc kubenswrapper[4758]: E0122 16:32:00.011719 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:32:00 crc kubenswrapper[4758]: I0122 16:32:00.808211 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:32:00 crc kubenswrapper[4758]: I0122 16:32:00.808302 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:32:00 crc kubenswrapper[4758]: E0122 16:32:00.808385 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:32:00 crc kubenswrapper[4758]: E0122 16:32:00.808842 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:32:00 crc kubenswrapper[4758]: I0122 16:32:00.809412 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:32:00 crc kubenswrapper[4758]: E0122 16:32:00.809596 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:32:01 crc kubenswrapper[4758]: E0122 16:32:01.880124 4758 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.072s" Jan 22 16:32:01 crc kubenswrapper[4758]: I0122 16:32:01.880334 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:32:01 crc kubenswrapper[4758]: E0122 16:32:01.880551 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:32:02 crc kubenswrapper[4758]: I0122 16:32:02.807408 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:32:02 crc kubenswrapper[4758]: E0122 16:32:02.807554 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:32:02 crc kubenswrapper[4758]: I0122 16:32:02.807724 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:32:02 crc kubenswrapper[4758]: E0122 16:32:02.807807 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:32:02 crc kubenswrapper[4758]: I0122 16:32:02.808016 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:32:02 crc kubenswrapper[4758]: E0122 16:32:02.808087 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:32:03 crc kubenswrapper[4758]: I0122 16:32:03.807323 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:32:03 crc kubenswrapper[4758]: E0122 16:32:03.808037 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xqns" podUID="3ef1c490-d5f9-458d-8b3e-8580a5f07df6" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.807342 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.808814 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.809521 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.812150 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.812173 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.813570 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.814062 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.818571 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:04 crc kubenswrapper[4758]: E0122 16:32:04.819030 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:34:06.819005111 +0000 UTC m=+268.302344416 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.920118 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.920197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.920240 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.920281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.921599 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.928183 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 16:32:04.928688 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:32:04 crc kubenswrapper[4758]: I0122 
16:32:04.928793 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:32:05 crc kubenswrapper[4758]: I0122 16:32:05.128191 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:32:05 crc kubenswrapper[4758]: I0122 16:32:05.137540 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:32:05 crc kubenswrapper[4758]: I0122 16:32:05.146948 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:32:05 crc kubenswrapper[4758]: I0122 16:32:05.491458 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8ad3378b41d7d4810191fdd0040ae833fbd42f682bd8c4ab6b8d5cd8ac7245ae"} Jan 22 16:32:05 crc kubenswrapper[4758]: W0122 16:32:05.678258 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-c5cca01003bd74bde381916db9e7587b0efff6ab4a15adab923aed6b113528a0 WatchSource:0}: Error finding container c5cca01003bd74bde381916db9e7587b0efff6ab4a15adab923aed6b113528a0: Status 404 returned error can't find the container with id c5cca01003bd74bde381916db9e7587b0efff6ab4a15adab923aed6b113528a0 Jan 22 16:32:05 crc kubenswrapper[4758]: W0122 16:32:05.695294 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-9bcae917361be579cc697e64e35612c9138f16d1d5a2ed21cb3e2d6d5c525047 WatchSource:0}: Error finding container 9bcae917361be579cc697e64e35612c9138f16d1d5a2ed21cb3e2d6d5c525047: Status 404 returned error can't find the container with id 9bcae917361be579cc697e64e35612c9138f16d1d5a2ed21cb3e2d6d5c525047 Jan 22 16:32:05 crc kubenswrapper[4758]: I0122 16:32:05.807531 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:32:05 crc kubenswrapper[4758]: I0122 16:32:05.809895 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 16:32:05 crc kubenswrapper[4758]: I0122 16:32:05.809995 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 16:32:06 crc kubenswrapper[4758]: I0122 16:32:06.496609 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c55c8f25ba49ce58510c6619e0c533323342cf226554e2933b025396ef8999db"} Jan 22 16:32:06 crc kubenswrapper[4758]: I0122 16:32:06.499554 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1a19645ea3cf25a4f5ef2ab7c9fb3e26686f48b56e25eb1693d6a3e89d117c83"} Jan 22 16:32:06 crc kubenswrapper[4758]: I0122 16:32:06.499601 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c5cca01003bd74bde381916db9e7587b0efff6ab4a15adab923aed6b113528a0"} Jan 22 16:32:06 crc kubenswrapper[4758]: I0122 16:32:06.500178 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:32:06 crc kubenswrapper[4758]: I0122 16:32:06.502581 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ed59d684538a87f1e3ff8fa2cc76b5f7d5c033005d57b12f1cda8b62998592b6"} Jan 22 16:32:06 crc kubenswrapper[4758]: I0122 16:32:06.503310 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9bcae917361be579cc697e64e35612c9138f16d1d5a2ed21cb3e2d6d5c525047"} Jan 22 16:32:13 crc kubenswrapper[4758]: I0122 16:32:13.837514 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:32:13 crc kubenswrapper[4758]: I0122 16:32:13.837619 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.169511 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.205671 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9h8hv"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.206584 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.206679 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcbh7"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.207016 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.212438 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.212892 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.213305 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.213424 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.240971 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.242168 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hwwcr"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.242568 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.245857 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.246880 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.249215 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.249987 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.250686 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.255880 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.256313 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.275065 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x45ps"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.275495 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.284839 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.323773 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.323822 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.324441 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.324495 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.324701 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.325174 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2k2wj"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.325294 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.325297 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.325332 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.325770 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.325863 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326228 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326253 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326277 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326240 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326367 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326391 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326425 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326395 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326513 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326518 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326827 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326957 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.326989 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.327013 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.327047 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.327018 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.329217 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-p5cqb"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.329861 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-p5cqb" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.330640 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.330867 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.331025 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.331277 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332205 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332242 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332267 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-encryption-config\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332317 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-config\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-config\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332430 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-image-import-ca\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332484 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-audit\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332506 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxldr\" (UniqueName: \"kubernetes.io/projected/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-kube-api-access-hxldr\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332524 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh9sv\" (UniqueName: \"kubernetes.io/projected/ac22080d-c713-4917-9254-d103edaa0c3e-kube-api-access-fh9sv\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332568 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332592 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332682 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-client-ca\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332759 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332787 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-audit-policies\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332811 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-audit-dir\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332829 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7dff7fd-8dda-42ec-a6c8-2eb3d675a830-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9dx9w\" (UID: \"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332846 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332865 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332889 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-audit-dir\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332906 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-serving-cert\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332930 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332946 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwfxm\" (UniqueName: \"kubernetes.io/projected/e926035e-0af8-45eb-9451-19c8827363c3-kube-api-access-cwfxm\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332960 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7dff7fd-8dda-42ec-a6c8-2eb3d675a830-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9dx9w\" (UID: \"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332978 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.332993 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vszfq\" (UniqueName: \"kubernetes.io/projected/b7dff7fd-8dda-42ec-a6c8-2eb3d675a830-kube-api-access-vszfq\") pod \"openshift-apiserver-operator-796bbdcf4f-9dx9w\" (UID: \"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333007 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac22080d-c713-4917-9254-d103edaa0c3e-serving-cert\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333024 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-etcd-client\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333040 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-encryption-config\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333056 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333070 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333079 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333102 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333119 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-etcd-serving-ca\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333135 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwd46\" (UniqueName: \"kubernetes.io/projected/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-kube-api-access-kwd46\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333160 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-serving-cert\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333184 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-etcd-client\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333203 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333220 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-node-pullsecrets\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333236 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333253 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e926035e-0af8-45eb-9451-19c8827363c3-audit-dir\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.333269 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-audit-policies\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.334573 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.338731 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.339200 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.341188 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.353004 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.353073 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.353530 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.355099 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.355756 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.356108 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.356463 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.357164 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.357623 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.359855 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.360866 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.372398 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.372958 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.373419 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.373770 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.374236 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.375000 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.375530 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.375910 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.376378 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.376973 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.413716 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.413897 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lnj88"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.414625 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.415168 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rc8wq"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.415342 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.415495 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.415630 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.416214 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-n2kln"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.416493 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.416893 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.417133 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.417300 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.417429 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zlnf7"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.417683 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.417817 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.418021 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.418043 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.418101 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.418279 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.418793 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.419435 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kd79d"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.419955 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.420347 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.420524 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.420597 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.420714 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.421137 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.422423 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.422896 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.424103 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.424432 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.424728 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.424859 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.424968 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.424982 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.425070 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.430804 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437490 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vszfq\" (UniqueName: \"kubernetes.io/projected/b7dff7fd-8dda-42ec-a6c8-2eb3d675a830-kube-api-access-vszfq\") pod \"openshift-apiserver-operator-796bbdcf4f-9dx9w\" (UID: \"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437546 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac22080d-c713-4917-9254-d103edaa0c3e-serving-cert\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437572 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8add5c64-8462-48d4-8ac6-6ea831d7a535-serving-cert\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437603 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprq8\" (UniqueName: \"kubernetes.io/projected/8add5c64-8462-48d4-8ac6-6ea831d7a535-kube-api-access-fprq8\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437626 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a8b9092-45e9-456e-b1bc-e997c96a9836-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jfncv\" (UID: \"7a8b9092-45e9-456e-b1bc-e997c96a9836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437653 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-etcd-client\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437676 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/652cdabf-3f77-4cff-aae4-1f51ed209be0-auth-proxy-config\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437698 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-encryption-config\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437721 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437774 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437803 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437825 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-etcd-serving-ca\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437845 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwd46\" (UniqueName: \"kubernetes.io/projected/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-kube-api-access-kwd46\") pod 
\"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437871 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8add5c64-8462-48d4-8ac6-6ea831d7a535-config\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437895 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1b56bc8-fee3-4990-88c8-12d557ea0639-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437920 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-serving-cert\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437941 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652cdabf-3f77-4cff-aae4-1f51ed209be0-config\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437965 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk82n\" (UniqueName: \"kubernetes.io/projected/327d43d9-41eb-4ef4-9df0-d38e0739b7df-kube-api-access-xk82n\") pod \"downloads-7954f5f757-p5cqb\" (UID: \"327d43d9-41eb-4ef4-9df0-d38e0739b7df\") " pod="openshift-console/downloads-7954f5f757-p5cqb" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.437987 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1b56bc8-fee3-4990-88c8-12d557ea0639-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438008 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-config\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438033 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-etcd-client\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" 
Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438055 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-node-pullsecrets\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438079 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-config\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438102 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438123 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-audit-policies\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438146 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438167 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/652cdabf-3f77-4cff-aae4-1f51ed209be0-machine-approver-tls\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438188 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbmlw\" (UniqueName: \"kubernetes.io/projected/652cdabf-3f77-4cff-aae4-1f51ed209be0-kube-api-access-lbmlw\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438211 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e926035e-0af8-45eb-9451-19c8827363c3-audit-dir\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438236 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438262 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-images\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438287 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438314 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438339 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-encryption-config\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438361 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1b56bc8-fee3-4990-88c8-12d557ea0639-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438387 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkn59\" (UniqueName: \"kubernetes.io/projected/7a8b9092-45e9-456e-b1bc-e997c96a9836-kube-api-access-xkn59\") pod \"cluster-samples-operator-665b6dd947-jfncv\" (UID: \"7a8b9092-45e9-456e-b1bc-e997c96a9836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438410 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-config\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438436 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-config\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438457 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-audit\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438477 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-image-import-ca\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438502 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxldr\" (UniqueName: \"kubernetes.io/projected/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-kube-api-access-hxldr\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-client-ca\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438557 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh9sv\" (UniqueName: \"kubernetes.io/projected/ac22080d-c713-4917-9254-d103edaa0c3e-kube-api-access-fh9sv\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438583 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6fxt\" (UniqueName: \"kubernetes.io/projected/7d7a9e04-71e1-4090-96af-395ad7e823ac-kube-api-access-j6fxt\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438609 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438632 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438678 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8bg\" (UniqueName: \"kubernetes.io/projected/c1b56bc8-fee3-4990-88c8-12d557ea0639-kube-api-access-rt8bg\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438706 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-audit-policies\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438732 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-client-ca\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438786 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8add5c64-8462-48d4-8ac6-6ea831d7a535-trusted-ca\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438809 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-audit-dir\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438831 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd7c4\" (UniqueName: \"kubernetes.io/projected/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-kube-api-access-jd7c4\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438854 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7dff7fd-8dda-42ec-a6c8-2eb3d675a830-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9dx9w\" (UID: \"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438876 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438899 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438924 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438947 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-serving-cert\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438968 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7a9e04-71e1-4090-96af-395ad7e823ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.438991 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-audit-dir\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.439018 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.439042 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwfxm\" (UniqueName: \"kubernetes.io/projected/e926035e-0af8-45eb-9451-19c8827363c3-kube-api-access-cwfxm\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: 
I0122 16:32:14.439063 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7dff7fd-8dda-42ec-a6c8-2eb3d675a830-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9dx9w\" (UID: \"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.439088 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.446830 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-audit-policies\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.447471 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-audit-policies\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.447906 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.448460 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-etcd-client\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.448530 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-node-pullsecrets\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.449255 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7dff7fd-8dda-42ec-a6c8-2eb3d675a830-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9dx9w\" (UID: \"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.449351 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e926035e-0af8-45eb-9451-19c8827363c3-audit-dir\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.449601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-image-import-ca\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.449821 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.450019 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.450934 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-client-ca\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.451071 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-audit-dir\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.452578 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.452786 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.453954 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-config\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.456452 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.456480 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-serving-cert\") pod \"apiserver-76f77b778f-9h8hv\" (UID: 
\"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.457691 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-config\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.457962 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-audit-dir\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.458131 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-etcd-serving-ca\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.458654 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-audit\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.458841 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.459188 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.460494 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.467707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-serving-cert\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.468400 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-etcd-client\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.469555 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.465374 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.469864 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.469963 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-encryption-config\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.470098 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.470144 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.470158 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.470232 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.470286 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.470486 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.470672 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.470729 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-encryption-config\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.470885 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.471031 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.471115 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.471375 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.471683 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.471819 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.471970 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.472093 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.472157 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.472190 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.472248 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.472274 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.472316 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.472389 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.472421 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.472902 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.473872 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7jtcn"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.489182 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.489402 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.489555 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.490555 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.490900 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.491375 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7dff7fd-8dda-42ec-a6c8-2eb3d675a830-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9dx9w\" (UID: \"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.491998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.492264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.492887 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.502448 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac22080d-c713-4917-9254-d103edaa0c3e-serving-cert\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.504357 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.506522 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.506576 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.508111 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.508620 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.508793 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.509692 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.509978 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.510983 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.511328 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.511892 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.512223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.512911 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.513544 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k254w"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.514194 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.514230 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.515698 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zw8x5"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.516312 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.517186 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.518412 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.518928 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.519702 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-65j2c"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.520212 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.520377 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.520881 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-trk29"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.521538 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.522011 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.523260 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hwwcr"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.523323 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.524162 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.524956 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.526189 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.526562 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.529059 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.529523 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcbh7"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.531295 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjsgm"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.531965 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.533216 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-54h94"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.533656 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.534733 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.535213 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.536273 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9h8hv"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.537662 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.538178 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.538975 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.539670 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540314 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540455 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk82n\" (UniqueName: \"kubernetes.io/projected/327d43d9-41eb-4ef4-9df0-d38e0739b7df-kube-api-access-xk82n\") pod \"downloads-7954f5f757-p5cqb\" (UID: \"327d43d9-41eb-4ef4-9df0-d38e0739b7df\") " pod="openshift-console/downloads-7954f5f757-p5cqb" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540482 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652cdabf-3f77-4cff-aae4-1f51ed209be0-config\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540501 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-config\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1b56bc8-fee3-4990-88c8-12d557ea0639-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540534 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-config\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540551 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/652cdabf-3f77-4cff-aae4-1f51ed209be0-machine-approver-tls\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540567 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbmlw\" (UniqueName: \"kubernetes.io/projected/652cdabf-3f77-4cff-aae4-1f51ed209be0-kube-api-access-lbmlw\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-images\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540599 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1b56bc8-fee3-4990-88c8-12d557ea0639-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540616 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkn59\" (UniqueName: \"kubernetes.io/projected/7a8b9092-45e9-456e-b1bc-e997c96a9836-kube-api-access-xkn59\") pod \"cluster-samples-operator-665b6dd947-jfncv\" (UID: \"7a8b9092-45e9-456e-b1bc-e997c96a9836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540644 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6fxt\" (UniqueName: \"kubernetes.io/projected/7d7a9e04-71e1-4090-96af-395ad7e823ac-kube-api-access-j6fxt\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540660 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8bg\" (UniqueName: \"kubernetes.io/projected/c1b56bc8-fee3-4990-88c8-12d557ea0639-kube-api-access-rt8bg\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: 
\"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-client-ca\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540693 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8add5c64-8462-48d4-8ac6-6ea831d7a535-trusted-ca\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540710 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd7c4\" (UniqueName: \"kubernetes.io/projected/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-kube-api-access-jd7c4\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540725 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540799 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7a9e04-71e1-4090-96af-395ad7e823ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540839 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8add5c64-8462-48d4-8ac6-6ea831d7a535-serving-cert\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540860 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fprq8\" (UniqueName: \"kubernetes.io/projected/8add5c64-8462-48d4-8ac6-6ea831d7a535-kube-api-access-fprq8\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540881 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a8b9092-45e9-456e-b1bc-e997c96a9836-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jfncv\" (UID: \"7a8b9092-45e9-456e-b1bc-e997c96a9836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" Jan 
22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540902 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/652cdabf-3f77-4cff-aae4-1f51ed209be0-auth-proxy-config\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540945 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8add5c64-8462-48d4-8ac6-6ea831d7a535-config\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.540966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1b56bc8-fee3-4990-88c8-12d557ea0639-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.541905 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652cdabf-3f77-4cff-aae4-1f51ed209be0-config\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.542350 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-config\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.542984 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-p5cqb"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.543016 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x45ps"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.544037 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/652cdabf-3f77-4cff-aae4-1f51ed209be0-auth-proxy-config\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.544050 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8add5c64-8462-48d4-8ac6-6ea831d7a535-trusted-ca\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.544493 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-images\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: 
\"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.544662 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8add5c64-8462-48d4-8ac6-6ea831d7a535-config\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.544834 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.545342 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2k2wj"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.545406 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1b56bc8-fee3-4990-88c8-12d557ea0639-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.545557 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-config\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.545732 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-client-ca\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.546216 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7a9e04-71e1-4090-96af-395ad7e823ac-serving-cert\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.546545 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a8b9092-45e9-456e-b1bc-e997c96a9836-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jfncv\" (UID: \"7a8b9092-45e9-456e-b1bc-e997c96a9836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.546705 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8add5c64-8462-48d4-8ac6-6ea831d7a535-serving-cert\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.548021 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1b56bc8-fee3-4990-88c8-12d557ea0639-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.548866 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.548400 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/652cdabf-3f77-4cff-aae4-1f51ed209be0-machine-approver-tls\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.569244 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.629815 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.648722 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.668872 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.689304 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.827471 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.827642 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.827980 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.853695 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vszfq\" (UniqueName: \"kubernetes.io/projected/b7dff7fd-8dda-42ec-a6c8-2eb3d675a830-kube-api-access-vszfq\") pod \"openshift-apiserver-operator-796bbdcf4f-9dx9w\" (UID: \"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.856041 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh9sv\" (UniqueName: \"kubernetes.io/projected/ac22080d-c713-4917-9254-d103edaa0c3e-kube-api-access-fh9sv\") pod \"controller-manager-879f6c89f-hwwcr\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.858445 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwfxm\" (UniqueName: \"kubernetes.io/projected/e926035e-0af8-45eb-9451-19c8827363c3-kube-api-access-cwfxm\") pod \"oauth-openshift-558db77b4-qcbh7\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.859212 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.862664 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxldr\" (UniqueName: \"kubernetes.io/projected/1358317e-b558-46f9-b9f7-0fcfcc0eb2c9-kube-api-access-hxldr\") pod \"apiserver-7bbb656c7d-hvjhq\" (UID: \"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.863125 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwd46\" (UniqueName: \"kubernetes.io/projected/0eb1a077-ff54-4f67-9cd5-e2c056ef807e-kube-api-access-kwd46\") pod \"apiserver-76f77b778f-9h8hv\" (UID: \"0eb1a077-ff54-4f67-9cd5-e2c056ef807e\") " pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.865029 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zlnf7"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.865074 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n2kln"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.865092 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.866339 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kd79d"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.869024 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.869423 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.870256 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.872785 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k254w"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.874664 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.876412 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.877845 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.878754 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.880784 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rc8wq"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.882307 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l5xjz"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.883310 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.884010 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.885484 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.886973 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lnj88"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.890940 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-zl4zm"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.891146 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.891471 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rfv8b"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.891712 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zl4zm" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.892047 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.892961 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zw8x5"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.894328 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.895608 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zl4zm"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.897011 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.898345 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.899736 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.901064 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.905441 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.905494 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.906881 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-65j2c"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.908060 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.908192 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l5xjz"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.912403 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.912475 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.912496 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.914521 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-trk29"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.916188 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-54h94"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.919110 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zj7cr"] Jan 22 16:32:14 crc 
kubenswrapper[4758]: I0122 16:32:14.919846 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.921238 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rfv8b"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.924012 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjsgm"] Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.929437 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.948794 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.969361 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 16:32:14 crc kubenswrapper[4758]: I0122 16:32:14.988423 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.009559 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.029136 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.048861 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.068925 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.088905 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.108406 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.127227 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.129245 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.142820 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.152097 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.160621 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.168451 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.191548 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.209626 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.236768 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.250064 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.268666 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.289024 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.309002 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.315070 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w"] Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.329306 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 16:32:15 crc kubenswrapper[4758]: W0122 16:32:15.333801 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7dff7fd_8dda_42ec_a6c8_2eb3d675a830.slice/crio-5c9161f68a89cc3827141a76a592ceff8a3b5cbc5b10c4aae1c3ccbb61b148e9 WatchSource:0}: Error finding container 5c9161f68a89cc3827141a76a592ceff8a3b5cbc5b10c4aae1c3ccbb61b148e9: Status 404 returned error can't find the container with id 5c9161f68a89cc3827141a76a592ceff8a3b5cbc5b10c4aae1c3ccbb61b148e9 Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.358086 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.368986 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.390414 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.408416 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.428611 4758 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.449005 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.468428 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.490649 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.508729 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.527441 4758 request.go:700] Waited for 1.008226352s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.529633 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.538799 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" event={"ID":"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830","Type":"ContainerStarted","Data":"0502cef21b2cd340350f062025593067c06047839cf02be5cd184a603e90851f"} Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.538853 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" event={"ID":"b7dff7fd-8dda-42ec-a6c8-2eb3d675a830","Type":"ContainerStarted","Data":"5c9161f68a89cc3827141a76a592ceff8a3b5cbc5b10c4aae1c3ccbb61b148e9"} Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.549155 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.570138 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.589903 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.596128 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq"] Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.608427 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.632469 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.634456 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hwwcr"] Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.639925 4758 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcbh7"] Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.651189 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.652540 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9h8hv"] Jan 22 16:32:15 crc kubenswrapper[4758]: W0122 16:32:15.653801 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac22080d_c713_4917_9254_d103edaa0c3e.slice/crio-92755c40a94b140798e4303171dc6c8a96905bcead76099262baa56656e94f94 WatchSource:0}: Error finding container 92755c40a94b140798e4303171dc6c8a96905bcead76099262baa56656e94f94: Status 404 returned error can't find the container with id 92755c40a94b140798e4303171dc6c8a96905bcead76099262baa56656e94f94 Jan 22 16:32:15 crc kubenswrapper[4758]: W0122 16:32:15.656798 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode926035e_0af8_45eb_9451_19c8827363c3.slice/crio-06fe3b48de957488ed3233fc44b2211826eaf0f4701de5c80c870b4221289206 WatchSource:0}: Error finding container 06fe3b48de957488ed3233fc44b2211826eaf0f4701de5c80c870b4221289206: Status 404 returned error can't find the container with id 06fe3b48de957488ed3233fc44b2211826eaf0f4701de5c80c870b4221289206 Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.668525 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 16:32:15 crc kubenswrapper[4758]: W0122 16:32:15.672828 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0eb1a077_ff54_4f67_9cd5_e2c056ef807e.slice/crio-338099b7d1cb101237538bea453a28eb9bb5c4beab3fe588658644246ca56062 WatchSource:0}: Error finding container 338099b7d1cb101237538bea453a28eb9bb5c4beab3fe588658644246ca56062: Status 404 returned error can't find the container with id 338099b7d1cb101237538bea453a28eb9bb5c4beab3fe588658644246ca56062 Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.690375 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.708811 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.729500 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.750371 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.769445 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.790071 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.810152 4758 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.836937 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.848588 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.869864 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.888442 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.908387 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.928465 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.949527 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.969304 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 16:32:15 crc kubenswrapper[4758]: I0122 16:32:15.989334 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.009030 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.029194 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.049116 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.068184 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.088866 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.109338 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.144759 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1b56bc8-fee3-4990-88c8-12d557ea0639-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.183646 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk82n\" (UniqueName: 
\"kubernetes.io/projected/327d43d9-41eb-4ef4-9df0-d38e0739b7df-kube-api-access-xk82n\") pod \"downloads-7954f5f757-p5cqb\" (UID: \"327d43d9-41eb-4ef4-9df0-d38e0739b7df\") " pod="openshift-console/downloads-7954f5f757-p5cqb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.196006 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8bg\" (UniqueName: \"kubernetes.io/projected/c1b56bc8-fee3-4990-88c8-12d557ea0639-kube-api-access-rt8bg\") pod \"cluster-image-registry-operator-dc59b4c8b-wxbnz\" (UID: \"c1b56bc8-fee3-4990-88c8-12d557ea0639\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.203939 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fprq8\" (UniqueName: \"kubernetes.io/projected/8add5c64-8462-48d4-8ac6-6ea831d7a535-kube-api-access-fprq8\") pod \"console-operator-58897d9998-x45ps\" (UID: \"8add5c64-8462-48d4-8ac6-6ea831d7a535\") " pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.221609 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkn59\" (UniqueName: \"kubernetes.io/projected/7a8b9092-45e9-456e-b1bc-e997c96a9836-kube-api-access-xkn59\") pod \"cluster-samples-operator-665b6dd947-jfncv\" (UID: \"7a8b9092-45e9-456e-b1bc-e997c96a9836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.225387 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-p5cqb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.249051 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd7c4\" (UniqueName: \"kubernetes.io/projected/e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7-kube-api-access-jd7c4\") pod \"machine-api-operator-5694c8668f-2k2wj\" (UID: \"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.266579 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6fxt\" (UniqueName: \"kubernetes.io/projected/7d7a9e04-71e1-4090-96af-395ad7e823ac-kube-api-access-j6fxt\") pod \"route-controller-manager-6576b87f9c-qc9q5\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.285710 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbmlw\" (UniqueName: \"kubernetes.io/projected/652cdabf-3f77-4cff-aae4-1f51ed209be0-kube-api-access-lbmlw\") pod \"machine-approver-56656f9798-2q4t5\" (UID: \"652cdabf-3f77-4cff-aae4-1f51ed209be0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.385289 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.396803 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.397771 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.397867 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398013 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398205 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c983b09-f715-422e-960d-36dcc610c30b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398237 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-trusted-ca\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398258 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-serving-cert\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398506 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzxhk\" (UniqueName: \"kubernetes.io/projected/3fcd001e-7c62-4167-adbd-afd79a1dd594-kube-api-access-qzxhk\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398536 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hj7d\" (UniqueName: \"kubernetes.io/projected/1d2c5bee-e237-4043-9f8a-73bb67ebf355-kube-api-access-6hj7d\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398559 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d2c5bee-e237-4043-9f8a-73bb67ebf355-default-certificate\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398579 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d2c5bee-e237-4043-9f8a-73bb67ebf355-metrics-certs\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " 
pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398600 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltwfg\" (UniqueName: \"kubernetes.io/projected/7006bfc3-2fa7-483c-8bcf-7ded310328a9-kube-api-access-ltwfg\") pod \"openshift-controller-manager-operator-756b6f6bc6-6fjnz\" (UID: \"7006bfc3-2fa7-483c-8bcf-7ded310328a9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398624 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvltd\" (UniqueName: \"kubernetes.io/projected/fcc6018a-27ef-4a30-98f2-90d2e6e454be-kube-api-access-kvltd\") pod \"dns-operator-744455d44c-zlnf7\" (UID: \"fcc6018a-27ef-4a30-98f2-90d2e6e454be\") " pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398644 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7006bfc3-2fa7-483c-8bcf-7ded310328a9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6fjnz\" (UID: \"7006bfc3-2fa7-483c-8bcf-7ded310328a9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398666 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-registry-certificates\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398696 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5lhh\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-kube-api-access-f5lhh\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398726 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d2c5bee-e237-4043-9f8a-73bb67ebf355-service-ca-bundle\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398759 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8bd5414-72ea-40f8-8cf2-a6bf81e1258a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lnj88\" (UID: \"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398795 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-oauth-serving-cert\") pod 
\"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398825 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2x7\" (UniqueName: \"kubernetes.io/projected/8f67259d-8eec-4f78-929f-01e7abe893f6-kube-api-access-dv2x7\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398846 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c983b09-f715-422e-960d-36dcc610c30b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398867 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3fcd001e-7c62-4167-adbd-afd79a1dd594-metrics-tls\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398892 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d2c5bee-e237-4043-9f8a-73bb67ebf355-stats-auth\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398926 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-etcd-client\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398950 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-oauth-config\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.398985 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3fcd001e-7c62-4167-adbd-afd79a1dd594-trusted-ca\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399005 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5760aa2c-cda7-44bb-8458-d31b09eb2de5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svrdl\" (UID: \"5760aa2c-cda7-44bb-8458-d31b09eb2de5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:16 
crc kubenswrapper[4758]: I0122 16:32:16.399024 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-service-ca\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399054 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399103 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-config\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399126 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-console-config\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399149 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7006bfc3-2fa7-483c-8bcf-7ded310328a9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6fjnz\" (UID: \"7006bfc3-2fa7-483c-8bcf-7ded310328a9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399169 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-etcd-ca\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399186 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-etcd-service-ca\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399208 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-serving-cert\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399227 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/c8bd5414-72ea-40f8-8cf2-a6bf81e1258a-serving-cert\") pod \"openshift-config-operator-7777fb866f-lnj88\" (UID: \"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399248 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5760aa2c-cda7-44bb-8458-d31b09eb2de5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svrdl\" (UID: \"5760aa2c-cda7-44bb-8458-d31b09eb2de5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399288 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-trusted-ca-bundle\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: E0122 16:32:16.399679 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:16.89966417 +0000 UTC m=+158.383003455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399909 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2z7\" (UniqueName: \"kubernetes.io/projected/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-kube-api-access-wn2z7\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.399946 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fcc6018a-27ef-4a30-98f2-90d2e6e454be-metrics-tls\") pod \"dns-operator-744455d44c-zlnf7\" (UID: \"fcc6018a-27ef-4a30-98f2-90d2e6e454be\") " pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.400128 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3fcd001e-7c62-4167-adbd-afd79a1dd594-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.400286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-registry-tls\") pod 
\"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.400358 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5760aa2c-cda7-44bb-8458-d31b09eb2de5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svrdl\" (UID: \"5760aa2c-cda7-44bb-8458-d31b09eb2de5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.400412 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-bound-sa-token\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.400455 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrdj\" (UniqueName: \"kubernetes.io/projected/c8bd5414-72ea-40f8-8cf2-a6bf81e1258a-kube-api-access-rsrdj\") pod \"openshift-config-operator-7777fb866f-lnj88\" (UID: \"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.469208 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.469447 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.469605 4758 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.473600 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.478093 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.478226 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.478429 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.478568 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.480874 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.495920 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.501559 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.501901 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-serving-cert\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.501929 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/5caed3c6-9037-4ecf-b0db-778db52bd3ee-kube-api-access-ngx4q\") pod \"marketplace-operator-79b997595-fjsgm\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.501947 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d20604ed-3385-44c3-8dfd-b212005182d2-proxy-tls\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.501962 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5ead16-7592-4bd3-9ebb-ee8499eb639b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bgbsx\" (UID: \"fb5ead16-7592-4bd3-9ebb-ee8499eb639b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.501977 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5ead16-7592-4bd3-9ebb-ee8499eb639b-config\") pod \"kube-apiserver-operator-766d6c64bb-bgbsx\" (UID: \"fb5ead16-7592-4bd3-9ebb-ee8499eb639b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.501993 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srzhm\" (UniqueName: \"kubernetes.io/projected/22aa04a2-a400-462b-b73e-ba9b37664490-kube-api-access-srzhm\") pod \"service-ca-9c57cc56f-54h94\" (UID: \"22aa04a2-a400-462b-b73e-ba9b37664490\") " pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502009 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37871634-4204-40b5-850b-5789bb71caf6-config\") pod 
\"kube-controller-manager-operator-78b949d7b-jflvh\" (UID: \"37871634-4204-40b5-850b-5789bb71caf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502028 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn2z7\" (UniqueName: \"kubernetes.io/projected/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-kube-api-access-wn2z7\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502043 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-serving-cert\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502059 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3fcd001e-7c62-4167-adbd-afd79a1dd594-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502073 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-config\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502091 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fjsgm\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502105 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68864ea7-f5a1-40f4-80c1-09bd344ef4f7-serving-cert\") pod \"service-ca-operator-777779d784-sbtxv\" (UID: \"68864ea7-f5a1-40f4-80c1-09bd344ef4f7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502120 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-service-ca-bundle\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502148 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qbf\" (UniqueName: 
\"kubernetes.io/projected/1772eca5-cae4-40ba-94c7-d00f0c70636f-kube-api-access-f4qbf\") pod \"multus-admission-controller-857f4d67dd-65j2c\" (UID: \"1772eca5-cae4-40ba-94c7-d00f0c70636f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502164 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5760aa2c-cda7-44bb-8458-d31b09eb2de5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svrdl\" (UID: \"5760aa2c-cda7-44bb-8458-d31b09eb2de5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502178 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-bound-sa-token\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502192 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fjsgm\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7baaa22f-75fb-4524-91fa-89eb385e0ad5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-crm27\" (UID: \"7baaa22f-75fb-4524-91fa-89eb385e0ad5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502233 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7baaa22f-75fb-4524-91fa-89eb385e0ad5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-crm27\" (UID: \"7baaa22f-75fb-4524-91fa-89eb385e0ad5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502258 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c983b09-f715-422e-960d-36dcc610c30b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502276 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0cb4bda1-2b7b-4c94-8735-dde72faef39e-cert\") pod \"ingress-canary-zl4zm\" (UID: \"0cb4bda1-2b7b-4c94-8735-dde72faef39e\") " pod="openshift-ingress-canary/ingress-canary-zl4zm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502290 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-trusted-ca\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502306 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sfkr\" (UniqueName: \"kubernetes.io/projected/68864ea7-f5a1-40f4-80c1-09bd344ef4f7-kube-api-access-6sfkr\") pod \"service-ca-operator-777779d784-sbtxv\" (UID: \"68864ea7-f5a1-40f4-80c1-09bd344ef4f7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502321 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkppt\" (UniqueName: \"kubernetes.io/projected/06a279e1-00f2-4ae0-9bc4-6481c53c14f1-kube-api-access-tkppt\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvjnm\" (UID: \"06a279e1-00f2-4ae0-9bc4-6481c53c14f1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502338 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-plugins-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502354 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d2c5bee-e237-4043-9f8a-73bb67ebf355-default-certificate\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502370 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvltd\" (UniqueName: \"kubernetes.io/projected/fcc6018a-27ef-4a30-98f2-90d2e6e454be-kube-api-access-kvltd\") pod \"dns-operator-744455d44c-zlnf7\" (UID: \"fcc6018a-27ef-4a30-98f2-90d2e6e454be\") " pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502384 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhrr\" (UniqueName: \"kubernetes.io/projected/7baaa22f-75fb-4524-91fa-89eb385e0ad5-kube-api-access-4rhrr\") pod \"kube-storage-version-migrator-operator-b67b599dd-crm27\" (UID: \"7baaa22f-75fb-4524-91fa-89eb385e0ad5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502399 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/050aa7a5-1385-4d83-baae-173bb748aed6-srv-cert\") pod \"catalog-operator-68c6474976-5z9sw\" (UID: \"050aa7a5-1385-4d83-baae-173bb748aed6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502418 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/c8bd5414-72ea-40f8-8cf2-a6bf81e1258a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lnj88\" (UID: \"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502435 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d2c5bee-e237-4043-9f8a-73bb67ebf355-stats-auth\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502450 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c33d209a-fda4-44bd-944f-95cc380f4173-proxy-tls\") pod \"machine-config-controller-84d6567774-k254w\" (UID: \"c33d209a-fda4-44bd-944f-95cc380f4173\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502466 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb5ead16-7592-4bd3-9ebb-ee8499eb639b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bgbsx\" (UID: \"fb5ead16-7592-4bd3-9ebb-ee8499eb639b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502480 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-tmpfs\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502504 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/22aa04a2-a400-462b-b73e-ba9b37664490-signing-key\") pod \"service-ca-9c57cc56f-54h94\" (UID: \"22aa04a2-a400-462b-b73e-ba9b37664490\") " pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502521 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3fcd001e-7c62-4167-adbd-afd79a1dd594-trusted-ca\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502536 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5760aa2c-cda7-44bb-8458-d31b09eb2de5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svrdl\" (UID: \"5760aa2c-cda7-44bb-8458-d31b09eb2de5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502551 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37871634-4204-40b5-850b-5789bb71caf6-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-jflvh\" (UID: \"37871634-4204-40b5-850b-5789bb71caf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502568 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-config\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502583 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24vgp\" (UniqueName: \"kubernetes.io/projected/0cb4bda1-2b7b-4c94-8735-dde72faef39e-kube-api-access-24vgp\") pod \"ingress-canary-zl4zm\" (UID: \"0cb4bda1-2b7b-4c94-8735-dde72faef39e\") " pod="openshift-ingress-canary/ingress-canary-zl4zm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502596 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdzwn\" (UniqueName: \"kubernetes.io/projected/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-kube-api-access-rdzwn\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7006bfc3-2fa7-483c-8bcf-7ded310328a9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6fjnz\" (UID: \"7006bfc3-2fa7-483c-8bcf-7ded310328a9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502635 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-webhook-cert\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502660 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-etcd-service-ca\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502677 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-socket-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502693 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8bd5414-72ea-40f8-8cf2-a6bf81e1258a-serving-cert\") pod \"openshift-config-operator-7777fb866f-lnj88\" (UID: \"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502707 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-apiservice-cert\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502725 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5760aa2c-cda7-44bb-8458-d31b09eb2de5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svrdl\" (UID: \"5760aa2c-cda7-44bb-8458-d31b09eb2de5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502780 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-trusted-ca-bundle\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502800 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48ntx\" (UniqueName: \"kubernetes.io/projected/d20604ed-3385-44c3-8dfd-b212005182d2-kube-api-access-48ntx\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502821 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fcc6018a-27ef-4a30-98f2-90d2e6e454be-metrics-tls\") pod \"dns-operator-744455d44c-zlnf7\" (UID: \"fcc6018a-27ef-4a30-98f2-90d2e6e454be\") " pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502836 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4-node-bootstrap-token\") pod \"machine-config-server-zj7cr\" (UID: \"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4\") " pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502851 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkh9n\" (UniqueName: \"kubernetes.io/projected/f6422cff-e2d5-4935-81b3-85fbb721a86b-kube-api-access-tkh9n\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502867 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5000e0b7-97a0-4868-a61c-281d1e2ab6ea-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rjlbg\" (UID: \"5000e0b7-97a0-4868-a61c-281d1e2ab6ea\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502882 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf96g\" (UniqueName: \"kubernetes.io/projected/050aa7a5-1385-4d83-baae-173bb748aed6-kube-api-access-rf96g\") pod \"catalog-operator-68c6474976-5z9sw\" (UID: \"050aa7a5-1385-4d83-baae-173bb748aed6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502898 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/06a279e1-00f2-4ae0-9bc4-6481c53c14f1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvjnm\" (UID: \"06a279e1-00f2-4ae0-9bc4-6481c53c14f1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502915 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-registry-tls\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502929 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7c9j\" (UniqueName: \"kubernetes.io/projected/ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4-kube-api-access-p7c9j\") pod \"machine-config-server-zj7cr\" (UID: \"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4\") " pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502945 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrdj\" (UniqueName: \"kubernetes.io/projected/c8bd5414-72ea-40f8-8cf2-a6bf81e1258a-kube-api-access-rsrdj\") pod \"openshift-config-operator-7777fb866f-lnj88\" (UID: \"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502959 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec5698b-f4e0-4c73-abe0-f999df35f0c6-secret-volume\") pod \"collect-profiles-29484990-bjkct\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502976 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk9ft\" (UniqueName: \"kubernetes.io/projected/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-kube-api-access-qk9ft\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.502990 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-csi-data-dir\") pod 
\"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503005 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-serving-cert\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503021 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db717b97-58b5-402c-983f-9bf1e88c40a4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wszfq\" (UID: \"db717b97-58b5-402c-983f-9bf1e88c40a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503036 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzxhk\" (UniqueName: \"kubernetes.io/projected/3fcd001e-7c62-4167-adbd-afd79a1dd594-kube-api-access-qzxhk\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503052 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hj7d\" (UniqueName: \"kubernetes.io/projected/1d2c5bee-e237-4043-9f8a-73bb67ebf355-kube-api-access-6hj7d\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503067 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d20604ed-3385-44c3-8dfd-b212005182d2-images\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503081 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37871634-4204-40b5-850b-5789bb71caf6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jflvh\" (UID: \"37871634-4204-40b5-850b-5789bb71caf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503098 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d2c5bee-e237-4043-9f8a-73bb67ebf355-metrics-certs\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503113 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltwfg\" (UniqueName: \"kubernetes.io/projected/7006bfc3-2fa7-483c-8bcf-7ded310328a9-kube-api-access-ltwfg\") pod \"openshift-controller-manager-operator-756b6f6bc6-6fjnz\" (UID: \"7006bfc3-2fa7-483c-8bcf-7ded310328a9\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503129 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s47cd\" (UniqueName: \"kubernetes.io/projected/c33d209a-fda4-44bd-944f-95cc380f4173-kube-api-access-s47cd\") pod \"machine-config-controller-84d6567774-k254w\" (UID: \"c33d209a-fda4-44bd-944f-95cc380f4173\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503144 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1772eca5-cae4-40ba-94c7-d00f0c70636f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-65j2c\" (UID: \"1772eca5-cae4-40ba-94c7-d00f0c70636f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503159 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7006bfc3-2fa7-483c-8bcf-7ded310328a9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6fjnz\" (UID: \"7006bfc3-2fa7-483c-8bcf-7ded310328a9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503176 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68864ea7-f5a1-40f4-80c1-09bd344ef4f7-config\") pod \"service-ca-operator-777779d784-sbtxv\" (UID: \"68864ea7-f5a1-40f4-80c1-09bd344ef4f7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503191 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d20604ed-3385-44c3-8dfd-b212005182d2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503206 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bc85282-8493-4e92-91eb-3a2072c87514-metrics-tls\") pod \"dns-default-rfv8b\" (UID: \"1bc85282-8493-4e92-91eb-3a2072c87514\") " pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503232 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-registry-certificates\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503246 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bc85282-8493-4e92-91eb-3a2072c87514-config-volume\") pod \"dns-default-rfv8b\" (UID: \"1bc85282-8493-4e92-91eb-3a2072c87514\") " 
pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503260 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6zbg\" (UniqueName: \"kubernetes.io/projected/1bc85282-8493-4e92-91eb-3a2072c87514-kube-api-access-d6zbg\") pod \"dns-default-rfv8b\" (UID: \"1bc85282-8493-4e92-91eb-3a2072c87514\") " pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503276 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33d209a-fda4-44bd-944f-95cc380f4173-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k254w\" (UID: \"c33d209a-fda4-44bd-944f-95cc380f4173\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503290 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtbm\" (UniqueName: \"kubernetes.io/projected/db717b97-58b5-402c-983f-9bf1e88c40a4-kube-api-access-cxtbm\") pod \"olm-operator-6b444d44fb-wszfq\" (UID: \"db717b97-58b5-402c-983f-9bf1e88c40a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503314 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5lhh\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-kube-api-access-f5lhh\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503329 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-mountpoint-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503344 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/22aa04a2-a400-462b-b73e-ba9b37664490-signing-cabundle\") pod \"service-ca-9c57cc56f-54h94\" (UID: \"22aa04a2-a400-462b-b73e-ba9b37664490\") " pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503360 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d2c5bee-e237-4043-9f8a-73bb67ebf355-service-ca-bundle\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503376 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jcd4\" (UniqueName: \"kubernetes.io/projected/cec5698b-f4e0-4c73-abe0-f999df35f0c6-kube-api-access-2jcd4\") pod \"collect-profiles-29484990-bjkct\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503400 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-oauth-serving-cert\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503415 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv2x7\" (UniqueName: \"kubernetes.io/projected/8f67259d-8eec-4f78-929f-01e7abe893f6-kube-api-access-dv2x7\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503429 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4-certs\") pod \"machine-config-server-zj7cr\" (UID: \"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4\") " pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503444 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c983b09-f715-422e-960d-36dcc610c30b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503460 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3fcd001e-7c62-4167-adbd-afd79a1dd594-metrics-tls\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503475 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503491 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-etcd-client\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vlqm\" (UniqueName: \"kubernetes.io/projected/914dc40a-791a-4d15-83b6-fb5f4002f786-kube-api-access-5vlqm\") pod \"migrator-59844c95c7-km5pw\" (UID: \"914dc40a-791a-4d15-83b6-fb5f4002f786\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503522 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-oauth-config\") pod 
\"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503537 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec5698b-f4e0-4c73-abe0-f999df35f0c6-config-volume\") pod \"collect-profiles-29484990-bjkct\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503553 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-service-ca\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503589 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db717b97-58b5-402c-983f-9bf1e88c40a4-srv-cert\") pod \"olm-operator-6b444d44fb-wszfq\" (UID: \"db717b97-58b5-402c-983f-9bf1e88c40a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503613 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-console-config\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503628 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-registration-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503644 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/050aa7a5-1385-4d83-baae-173bb748aed6-profile-collector-cert\") pod \"catalog-operator-68c6474976-5z9sw\" (UID: \"050aa7a5-1385-4d83-baae-173bb748aed6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503670 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-etcd-ca\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.503685 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq85s\" (UniqueName: \"kubernetes.io/projected/5000e0b7-97a0-4868-a61c-281d1e2ab6ea-kube-api-access-mq85s\") pod \"package-server-manager-789f6589d5-rjlbg\" (UID: \"5000e0b7-97a0-4868-a61c-281d1e2ab6ea\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:32:16 crc kubenswrapper[4758]: E0122 16:32:16.503834 
4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:17.00381681 +0000 UTC m=+158.487156095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.532000 4758 request.go:700] Waited for 1.639667888s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0 Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.534495 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-config\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.535218 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d2c5bee-e237-4043-9f8a-73bb67ebf355-service-ca-bundle\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.535374 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7006bfc3-2fa7-483c-8bcf-7ded310328a9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6fjnz\" (UID: \"7006bfc3-2fa7-483c-8bcf-7ded310328a9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.535849 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.536130 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-etcd-service-ca\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.537073 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5760aa2c-cda7-44bb-8458-d31b09eb2de5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svrdl\" (UID: \"5760aa2c-cda7-44bb-8458-d31b09eb2de5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.537265 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-serving-cert\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.575046 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-console-config\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.575176 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8bd5414-72ea-40f8-8cf2-a6bf81e1258a-serving-cert\") pod \"openshift-config-operator-7777fb866f-lnj88\" (UID: \"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.575250 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.575841 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c983b09-f715-422e-960d-36dcc610c30b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.576068 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-trusted-ca\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.576994 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-serving-cert\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.577994 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d2c5bee-e237-4043-9f8a-73bb67ebf355-metrics-certs\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.578398 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8bd5414-72ea-40f8-8cf2-a6bf81e1258a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lnj88\" (UID: \"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.578794 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5760aa2c-cda7-44bb-8458-d31b09eb2de5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svrdl\" (UID: 
\"5760aa2c-cda7-44bb-8458-d31b09eb2de5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.579297 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c983b09-f715-422e-960d-36dcc610c30b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.581757 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.583396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3fcd001e-7c62-4167-adbd-afd79a1dd594-metrics-tls\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.584495 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7006bfc3-2fa7-483c-8bcf-7ded310328a9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6fjnz\" (UID: \"7006bfc3-2fa7-483c-8bcf-7ded310328a9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.585242 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-oauth-config\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.588239 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fcc6018a-27ef-4a30-98f2-90d2e6e454be-metrics-tls\") pod \"dns-operator-744455d44c-zlnf7\" (UID: \"fcc6018a-27ef-4a30-98f2-90d2e6e454be\") " pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.588556 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-registry-tls\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.591663 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-oauth-serving-cert\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.593380 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.616204 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/1d2c5bee-e237-4043-9f8a-73bb67ebf355-default-certificate\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.641006 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-etcd-ca\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.642246 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-etcd-client\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.657951 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3fcd001e-7c62-4167-adbd-afd79a1dd594-trusted-ca\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.669971 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-registry-certificates\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.678949 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-service-ca\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.744126 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d2c5bee-e237-4043-9f8a-73bb67ebf355-stats-auth\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.748498 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-trusted-ca-bundle\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.752702 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn2z7\" (UniqueName: \"kubernetes.io/projected/7c7ef802-c8dd-48a5-a7c5-5cf646b633f2-kube-api-access-wn2z7\") pod \"etcd-operator-b45778765-rc8wq\" (UID: \"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754427 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-mountpoint-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754458 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/22aa04a2-a400-462b-b73e-ba9b37664490-signing-cabundle\") pod \"service-ca-9c57cc56f-54h94\" (UID: \"22aa04a2-a400-462b-b73e-ba9b37664490\") " pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754480 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jcd4\" (UniqueName: \"kubernetes.io/projected/cec5698b-f4e0-4c73-abe0-f999df35f0c6-kube-api-access-2jcd4\") pod \"collect-profiles-29484990-bjkct\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754506 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4-certs\") pod \"machine-config-server-zj7cr\" (UID: \"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4\") " pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754528 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754545 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vlqm\" (UniqueName: \"kubernetes.io/projected/914dc40a-791a-4d15-83b6-fb5f4002f786-kube-api-access-5vlqm\") pod \"migrator-59844c95c7-km5pw\" (UID: \"914dc40a-791a-4d15-83b6-fb5f4002f786\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec5698b-f4e0-4c73-abe0-f999df35f0c6-config-volume\") pod \"collect-profiles-29484990-bjkct\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754590 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754608 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db717b97-58b5-402c-983f-9bf1e88c40a4-srv-cert\") pod \"olm-operator-6b444d44fb-wszfq\" (UID: \"db717b97-58b5-402c-983f-9bf1e88c40a4\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754628 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-registration-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754646 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/050aa7a5-1385-4d83-baae-173bb748aed6-profile-collector-cert\") pod \"catalog-operator-68c6474976-5z9sw\" (UID: \"050aa7a5-1385-4d83-baae-173bb748aed6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754665 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq85s\" (UniqueName: \"kubernetes.io/projected/5000e0b7-97a0-4868-a61c-281d1e2ab6ea-kube-api-access-mq85s\") pod \"package-server-manager-789f6589d5-rjlbg\" (UID: \"5000e0b7-97a0-4868-a61c-281d1e2ab6ea\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754682 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/5caed3c6-9037-4ecf-b0db-778db52bd3ee-kube-api-access-ngx4q\") pod \"marketplace-operator-79b997595-fjsgm\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754701 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5ead16-7592-4bd3-9ebb-ee8499eb639b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bgbsx\" (UID: \"fb5ead16-7592-4bd3-9ebb-ee8499eb639b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754717 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5ead16-7592-4bd3-9ebb-ee8499eb639b-config\") pod \"kube-apiserver-operator-766d6c64bb-bgbsx\" (UID: \"fb5ead16-7592-4bd3-9ebb-ee8499eb639b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754732 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d20604ed-3385-44c3-8dfd-b212005182d2-proxy-tls\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754766 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srzhm\" (UniqueName: \"kubernetes.io/projected/22aa04a2-a400-462b-b73e-ba9b37664490-kube-api-access-srzhm\") pod \"service-ca-9c57cc56f-54h94\" (UID: \"22aa04a2-a400-462b-b73e-ba9b37664490\") " pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754789 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37871634-4204-40b5-850b-5789bb71caf6-config\") pod \"kube-controller-manager-operator-78b949d7b-jflvh\" (UID: \"37871634-4204-40b5-850b-5789bb71caf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754813 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-serving-cert\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754833 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-config\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754858 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fjsgm\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754873 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68864ea7-f5a1-40f4-80c1-09bd344ef4f7-serving-cert\") pod \"service-ca-operator-777779d784-sbtxv\" (UID: \"68864ea7-f5a1-40f4-80c1-09bd344ef4f7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754894 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4qbf\" (UniqueName: \"kubernetes.io/projected/1772eca5-cae4-40ba-94c7-d00f0c70636f-kube-api-access-f4qbf\") pod \"multus-admission-controller-857f4d67dd-65j2c\" (UID: \"1772eca5-cae4-40ba-94c7-d00f0c70636f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754909 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-service-ca-bundle\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754928 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7baaa22f-75fb-4524-91fa-89eb385e0ad5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-crm27\" (UID: \"7baaa22f-75fb-4524-91fa-89eb385e0ad5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754947 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fjsgm\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754961 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7baaa22f-75fb-4524-91fa-89eb385e0ad5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-crm27\" (UID: \"7baaa22f-75fb-4524-91fa-89eb385e0ad5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.754980 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0cb4bda1-2b7b-4c94-8735-dde72faef39e-cert\") pod \"ingress-canary-zl4zm\" (UID: \"0cb4bda1-2b7b-4c94-8735-dde72faef39e\") " pod="openshift-ingress-canary/ingress-canary-zl4zm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.755196 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sfkr\" (UniqueName: \"kubernetes.io/projected/68864ea7-f5a1-40f4-80c1-09bd344ef4f7-kube-api-access-6sfkr\") pod \"service-ca-operator-777779d784-sbtxv\" (UID: \"68864ea7-f5a1-40f4-80c1-09bd344ef4f7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.755223 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkppt\" (UniqueName: \"kubernetes.io/projected/06a279e1-00f2-4ae0-9bc4-6481c53c14f1-kube-api-access-tkppt\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvjnm\" (UID: \"06a279e1-00f2-4ae0-9bc4-6481c53c14f1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.755241 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-plugins-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.755266 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rhrr\" (UniqueName: \"kubernetes.io/projected/7baaa22f-75fb-4524-91fa-89eb385e0ad5-kube-api-access-4rhrr\") pod \"kube-storage-version-migrator-operator-b67b599dd-crm27\" (UID: \"7baaa22f-75fb-4524-91fa-89eb385e0ad5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.755283 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/050aa7a5-1385-4d83-baae-173bb748aed6-srv-cert\") pod \"catalog-operator-68c6474976-5z9sw\" (UID: \"050aa7a5-1385-4d83-baae-173bb748aed6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756172 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/c33d209a-fda4-44bd-944f-95cc380f4173-proxy-tls\") pod \"machine-config-controller-84d6567774-k254w\" (UID: \"c33d209a-fda4-44bd-944f-95cc380f4173\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756199 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb5ead16-7592-4bd3-9ebb-ee8499eb639b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bgbsx\" (UID: \"fb5ead16-7592-4bd3-9ebb-ee8499eb639b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756227 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/22aa04a2-a400-462b-b73e-ba9b37664490-signing-key\") pod \"service-ca-9c57cc56f-54h94\" (UID: \"22aa04a2-a400-462b-b73e-ba9b37664490\") " pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-tmpfs\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756272 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24vgp\" (UniqueName: \"kubernetes.io/projected/0cb4bda1-2b7b-4c94-8735-dde72faef39e-kube-api-access-24vgp\") pod \"ingress-canary-zl4zm\" (UID: \"0cb4bda1-2b7b-4c94-8735-dde72faef39e\") " pod="openshift-ingress-canary/ingress-canary-zl4zm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdzwn\" (UniqueName: \"kubernetes.io/projected/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-kube-api-access-rdzwn\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37871634-4204-40b5-850b-5789bb71caf6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jflvh\" (UID: \"37871634-4204-40b5-850b-5789bb71caf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-socket-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756362 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-webhook-cert\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756381 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-apiservice-cert\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756409 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48ntx\" (UniqueName: \"kubernetes.io/projected/d20604ed-3385-44c3-8dfd-b212005182d2-kube-api-access-48ntx\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756435 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4-node-bootstrap-token\") pod \"machine-config-server-zj7cr\" (UID: \"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4\") " pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756453 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkh9n\" (UniqueName: \"kubernetes.io/projected/f6422cff-e2d5-4935-81b3-85fbb721a86b-kube-api-access-tkh9n\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756470 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5000e0b7-97a0-4868-a61c-281d1e2ab6ea-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rjlbg\" (UID: \"5000e0b7-97a0-4868-a61c-281d1e2ab6ea\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756490 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf96g\" (UniqueName: \"kubernetes.io/projected/050aa7a5-1385-4d83-baae-173bb748aed6-kube-api-access-rf96g\") pod \"catalog-operator-68c6474976-5z9sw\" (UID: \"050aa7a5-1385-4d83-baae-173bb748aed6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756506 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/06a279e1-00f2-4ae0-9bc4-6481c53c14f1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvjnm\" (UID: \"06a279e1-00f2-4ae0-9bc4-6481c53c14f1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756525 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7c9j\" (UniqueName: \"kubernetes.io/projected/ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4-kube-api-access-p7c9j\") pod \"machine-config-server-zj7cr\" (UID: \"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4\") " 
pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756543 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec5698b-f4e0-4c73-abe0-f999df35f0c6-secret-volume\") pod \"collect-profiles-29484990-bjkct\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756576 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk9ft\" (UniqueName: \"kubernetes.io/projected/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-kube-api-access-qk9ft\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756592 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-csi-data-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: E0122 16:32:16.756624 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:17.256599215 +0000 UTC m=+158.739938500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756661 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db717b97-58b5-402c-983f-9bf1e88c40a4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wszfq\" (UID: \"db717b97-58b5-402c-983f-9bf1e88c40a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756684 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-csi-data-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756696 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d20604ed-3385-44c3-8dfd-b212005182d2-images\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756728 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s47cd\" 
(UniqueName: \"kubernetes.io/projected/c33d209a-fda4-44bd-944f-95cc380f4173-kube-api-access-s47cd\") pod \"machine-config-controller-84d6567774-k254w\" (UID: \"c33d209a-fda4-44bd-944f-95cc380f4173\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756783 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1772eca5-cae4-40ba-94c7-d00f0c70636f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-65j2c\" (UID: \"1772eca5-cae4-40ba-94c7-d00f0c70636f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756801 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37871634-4204-40b5-850b-5789bb71caf6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jflvh\" (UID: \"37871634-4204-40b5-850b-5789bb71caf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756819 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68864ea7-f5a1-40f4-80c1-09bd344ef4f7-config\") pod \"service-ca-operator-777779d784-sbtxv\" (UID: \"68864ea7-f5a1-40f4-80c1-09bd344ef4f7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756836 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d20604ed-3385-44c3-8dfd-b212005182d2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756853 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bc85282-8493-4e92-91eb-3a2072c87514-metrics-tls\") pod \"dns-default-rfv8b\" (UID: \"1bc85282-8493-4e92-91eb-3a2072c87514\") " pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756870 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bc85282-8493-4e92-91eb-3a2072c87514-config-volume\") pod \"dns-default-rfv8b\" (UID: \"1bc85282-8493-4e92-91eb-3a2072c87514\") " pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756889 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6zbg\" (UniqueName: \"kubernetes.io/projected/1bc85282-8493-4e92-91eb-3a2072c87514-kube-api-access-d6zbg\") pod \"dns-default-rfv8b\" (UID: \"1bc85282-8493-4e92-91eb-3a2072c87514\") " pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756913 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c33d209a-fda4-44bd-944f-95cc380f4173-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k254w\" (UID: \"c33d209a-fda4-44bd-944f-95cc380f4173\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756935 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxtbm\" (UniqueName: \"kubernetes.io/projected/db717b97-58b5-402c-983f-9bf1e88c40a4-kube-api-access-cxtbm\") pod \"olm-operator-6b444d44fb-wszfq\" (UID: \"db717b97-58b5-402c-983f-9bf1e88c40a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.757120 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-mountpoint-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.757474 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37871634-4204-40b5-850b-5789bb71caf6-config\") pod \"kube-controller-manager-operator-78b949d7b-jflvh\" (UID: \"37871634-4204-40b5-850b-5789bb71caf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.757821 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/22aa04a2-a400-462b-b73e-ba9b37664490-signing-cabundle\") pod \"service-ca-9c57cc56f-54h94\" (UID: \"22aa04a2-a400-462b-b73e-ba9b37664490\") " pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.762429 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7baaa22f-75fb-4524-91fa-89eb385e0ad5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-crm27\" (UID: \"7baaa22f-75fb-4524-91fa-89eb385e0ad5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.763057 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-config\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.768078 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3fcd001e-7c62-4167-adbd-afd79a1dd594-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.771883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzxhk\" (UniqueName: \"kubernetes.io/projected/3fcd001e-7c62-4167-adbd-afd79a1dd594-kube-api-access-qzxhk\") pod \"ingress-operator-5b745b69d9-kqd5s\" (UID: \"3fcd001e-7c62-4167-adbd-afd79a1dd594\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.778174 4758 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fjsgm\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.780394 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv2x7\" (UniqueName: \"kubernetes.io/projected/8f67259d-8eec-4f78-929f-01e7abe893f6-kube-api-access-dv2x7\") pod \"console-f9d7485db-n2kln\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.780816 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68864ea7-f5a1-40f4-80c1-09bd344ef4f7-serving-cert\") pod \"service-ca-operator-777779d784-sbtxv\" (UID: \"68864ea7-f5a1-40f4-80c1-09bd344ef4f7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.782073 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d20604ed-3385-44c3-8dfd-b212005182d2-images\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.782096 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-service-ca-bundle\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.782621 4758 generic.go:334] "Generic (PLEG): container finished" podID="0eb1a077-ff54-4f67-9cd5-e2c056ef807e" containerID="7c8e99da8503a3255fc3adaa188c794ddd0216afa1470468297168da0cdb960d" exitCode=0 Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.782704 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" event={"ID":"0eb1a077-ff54-4f67-9cd5-e2c056ef807e","Type":"ContainerDied","Data":"7c8e99da8503a3255fc3adaa188c794ddd0216afa1470468297168da0cdb960d"} Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.782732 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" event={"ID":"0eb1a077-ff54-4f67-9cd5-e2c056ef807e","Type":"ContainerStarted","Data":"338099b7d1cb101237538bea453a28eb9bb5c4beab3fe588658644246ca56062"} Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.783145 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltwfg\" (UniqueName: \"kubernetes.io/projected/7006bfc3-2fa7-483c-8bcf-7ded310328a9-kube-api-access-ltwfg\") pod \"openshift-controller-manager-operator-756b6f6bc6-6fjnz\" (UID: \"7006bfc3-2fa7-483c-8bcf-7ded310328a9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.785601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/db717b97-58b5-402c-983f-9bf1e88c40a4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wszfq\" (UID: \"db717b97-58b5-402c-983f-9bf1e88c40a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.785623 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-registration-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.756769 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-plugins-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.790150 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" event={"ID":"652cdabf-3f77-4cff-aae4-1f51ed209be0","Type":"ContainerStarted","Data":"c44f7cfa9d9ba4ca0d5fdf42b6b8bc006371b803bdb178be0d191481fb1e88fb"} Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.790756 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db717b97-58b5-402c-983f-9bf1e88c40a4-srv-cert\") pod \"olm-operator-6b444d44fb-wszfq\" (UID: \"db717b97-58b5-402c-983f-9bf1e88c40a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.796514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec5698b-f4e0-4c73-abe0-f999df35f0c6-config-volume\") pod \"collect-profiles-29484990-bjkct\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.797165 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-bound-sa-token\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.797691 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-tmpfs\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.798264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68864ea7-f5a1-40f4-80c1-09bd344ef4f7-config\") pod \"service-ca-operator-777779d784-sbtxv\" (UID: \"68864ea7-f5a1-40f4-80c1-09bd344ef4f7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.799583 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/c33d209a-fda4-44bd-944f-95cc380f4173-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k254w\" (UID: \"c33d209a-fda4-44bd-944f-95cc380f4173\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.808092 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c33d209a-fda4-44bd-944f-95cc380f4173-proxy-tls\") pod \"machine-config-controller-84d6567774-k254w\" (UID: \"c33d209a-fda4-44bd-944f-95cc380f4173\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.824829 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.824935 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6422cff-e2d5-4935-81b3-85fbb721a86b-socket-dir\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.825513 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5ead16-7592-4bd3-9ebb-ee8499eb639b-config\") pod \"kube-apiserver-operator-766d6c64bb-bgbsx\" (UID: \"fb5ead16-7592-4bd3-9ebb-ee8499eb639b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.826499 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d20604ed-3385-44c3-8dfd-b212005182d2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.827240 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bc85282-8493-4e92-91eb-3a2072c87514-config-volume\") pod \"dns-default-rfv8b\" (UID: \"1bc85282-8493-4e92-91eb-3a2072c87514\") " pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.828079 4758 generic.go:334] "Generic (PLEG): container finished" podID="1358317e-b558-46f9-b9f7-0fcfcc0eb2c9" containerID="c79a3ffc31dc62ee2fe2355d723f17a9aa5bfa9ce2c0a797adc024516852ef75" exitCode=0 Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.828817 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d20604ed-3385-44c3-8dfd-b212005182d2-proxy-tls\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.829068 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" event={"ID":"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9","Type":"ContainerDied","Data":"c79a3ffc31dc62ee2fe2355d723f17a9aa5bfa9ce2c0a797adc024516852ef75"} Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.829089 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" event={"ID":"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9","Type":"ContainerStarted","Data":"fb768b0d431328f529ac89834bead6f3473651771a746f7f5fd4cffdd3e8f50a"} Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.843347 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7baaa22f-75fb-4524-91fa-89eb385e0ad5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-crm27\" (UID: \"7baaa22f-75fb-4524-91fa-89eb385e0ad5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.844537 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" event={"ID":"e926035e-0af8-45eb-9451-19c8827363c3","Type":"ContainerStarted","Data":"237c3a2fb8131d656e985482a0995ed58bc9dfebd0e06074bdce07f532f3f33d"} Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.844570 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" event={"ID":"e926035e-0af8-45eb-9451-19c8827363c3","Type":"ContainerStarted","Data":"06fe3b48de957488ed3233fc44b2211826eaf0f4701de5c80c870b4221289206"} Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.845215 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.845620 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvltd\" (UniqueName: \"kubernetes.io/projected/fcc6018a-27ef-4a30-98f2-90d2e6e454be-kube-api-access-kvltd\") pod \"dns-operator-744455d44c-zlnf7\" (UID: \"fcc6018a-27ef-4a30-98f2-90d2e6e454be\") " pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.845618 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec5698b-f4e0-4c73-abe0-f999df35f0c6-secret-volume\") pod \"collect-profiles-29484990-bjkct\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.847492 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/06a279e1-00f2-4ae0-9bc4-6481c53c14f1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvjnm\" (UID: \"06a279e1-00f2-4ae0-9bc4-6481c53c14f1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.851429 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5760aa2c-cda7-44bb-8458-d31b09eb2de5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-svrdl\" (UID: \"5760aa2c-cda7-44bb-8458-d31b09eb2de5\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.858892 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:16 crc kubenswrapper[4758]: E0122 16:32:16.859250 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:17.35922584 +0000 UTC m=+158.842565125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.859391 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: E0122 16:32:16.860552 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:17.360543407 +0000 UTC m=+158.843882702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.862621 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bc85282-8493-4e92-91eb-3a2072c87514-metrics-tls\") pod \"dns-default-rfv8b\" (UID: \"1bc85282-8493-4e92-91eb-3a2072c87514\") " pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.862628 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5ead16-7592-4bd3-9ebb-ee8499eb639b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bgbsx\" (UID: \"fb5ead16-7592-4bd3-9ebb-ee8499eb639b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.862962 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-apiservice-cert\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.863178 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fjsgm\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.863732 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/050aa7a5-1385-4d83-baae-173bb748aed6-srv-cert\") pod \"catalog-operator-68c6474976-5z9sw\" (UID: \"050aa7a5-1385-4d83-baae-173bb748aed6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.864384 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" event={"ID":"ac22080d-c713-4917-9254-d103edaa0c3e","Type":"ContainerStarted","Data":"9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548"} Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.865629 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.865695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" event={"ID":"ac22080d-c713-4917-9254-d103edaa0c3e","Type":"ContainerStarted","Data":"92755c40a94b140798e4303171dc6c8a96905bcead76099262baa56656e94f94"} Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.869709 4758 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hwwcr container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.869776 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" podUID="ac22080d-c713-4917-9254-d103edaa0c3e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.870265 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-serving-cert\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.870490 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4-certs\") pod \"machine-config-server-zj7cr\" (UID: \"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4\") " pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.870577 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5000e0b7-97a0-4868-a61c-281d1e2ab6ea-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rjlbg\" (UID: \"5000e0b7-97a0-4868-a61c-281d1e2ab6ea\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.870700 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrdj\" (UniqueName: \"kubernetes.io/projected/c8bd5414-72ea-40f8-8cf2-a6bf81e1258a-kube-api-access-rsrdj\") pod \"openshift-config-operator-7777fb866f-lnj88\" (UID: \"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.872972 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1772eca5-cae4-40ba-94c7-d00f0c70636f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-65j2c\" (UID: \"1772eca5-cae4-40ba-94c7-d00f0c70636f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.873575 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4-node-bootstrap-token\") pod \"machine-config-server-zj7cr\" (UID: \"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4\") " pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.882144 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37871634-4204-40b5-850b-5789bb71caf6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jflvh\" (UID: \"37871634-4204-40b5-850b-5789bb71caf6\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.884335 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/050aa7a5-1385-4d83-baae-173bb748aed6-profile-collector-cert\") pod \"catalog-operator-68c6474976-5z9sw\" (UID: \"050aa7a5-1385-4d83-baae-173bb748aed6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.884563 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5lhh\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-kube-api-access-f5lhh\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.885685 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0cb4bda1-2b7b-4c94-8735-dde72faef39e-cert\") pod \"ingress-canary-zl4zm\" (UID: \"0cb4bda1-2b7b-4c94-8735-dde72faef39e\") " pod="openshift-ingress-canary/ingress-canary-zl4zm" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.893251 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/22aa04a2-a400-462b-b73e-ba9b37664490-signing-key\") pod \"service-ca-9c57cc56f-54h94\" (UID: \"22aa04a2-a400-462b-b73e-ba9b37664490\") " pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.895689 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-webhook-cert\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.895844 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.899372 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srzhm\" (UniqueName: \"kubernetes.io/projected/22aa04a2-a400-462b-b73e-ba9b37664490-kube-api-access-srzhm\") pod \"service-ca-9c57cc56f-54h94\" (UID: \"22aa04a2-a400-462b-b73e-ba9b37664490\") " pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.900673 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hj7d\" (UniqueName: \"kubernetes.io/projected/1d2c5bee-e237-4043-9f8a-73bb67ebf355-kube-api-access-6hj7d\") pod \"router-default-5444994796-7jtcn\" (UID: \"1d2c5bee-e237-4043-9f8a-73bb67ebf355\") " pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.902338 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rhrr\" (UniqueName: \"kubernetes.io/projected/7baaa22f-75fb-4524-91fa-89eb385e0ad5-kube-api-access-4rhrr\") pod \"kube-storage-version-migrator-operator-b67b599dd-crm27\" (UID: \"7baaa22f-75fb-4524-91fa-89eb385e0ad5\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.922886 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxtbm\" (UniqueName: \"kubernetes.io/projected/db717b97-58b5-402c-983f-9bf1e88c40a4-kube-api-access-cxtbm\") pod \"olm-operator-6b444d44fb-wszfq\" (UID: \"db717b97-58b5-402c-983f-9bf1e88c40a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.924576 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-54h94" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.931243 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jcd4\" (UniqueName: \"kubernetes.io/projected/cec5698b-f4e0-4c73-abe0-f999df35f0c6-kube-api-access-2jcd4\") pod \"collect-profiles-29484990-bjkct\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.941596 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4qbf\" (UniqueName: \"kubernetes.io/projected/1772eca5-cae4-40ba-94c7-d00f0c70636f-kube-api-access-f4qbf\") pod \"multus-admission-controller-857f4d67dd-65j2c\" (UID: \"1772eca5-cae4-40ba-94c7-d00f0c70636f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.943685 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.968522 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.971897 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.972510 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:16 crc kubenswrapper[4758]: E0122 16:32:16.972599 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:17.472581909 +0000 UTC m=+158.955921194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.989165 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.989771 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" Jan 22 16:32:16 crc kubenswrapper[4758]: I0122 16:32:16.991672 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24vgp\" (UniqueName: \"kubernetes.io/projected/0cb4bda1-2b7b-4c94-8735-dde72faef39e-kube-api-access-24vgp\") pod \"ingress-canary-zl4zm\" (UID: \"0cb4bda1-2b7b-4c94-8735-dde72faef39e\") " pod="openshift-ingress-canary/ingress-canary-zl4zm" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.013296 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/5caed3c6-9037-4ecf-b0db-778db52bd3ee-kube-api-access-ngx4q\") pod \"marketplace-operator-79b997595-fjsgm\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.018329 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq85s\" (UniqueName: \"kubernetes.io/projected/5000e0b7-97a0-4868-a61c-281d1e2ab6ea-kube-api-access-mq85s\") pod \"package-server-manager-789f6589d5-rjlbg\" (UID: \"5000e0b7-97a0-4868-a61c-281d1e2ab6ea\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.022353 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.036574 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.038716 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.046617 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zl4zm" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.047316 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.048984 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vlqm\" (UniqueName: \"kubernetes.io/projected/914dc40a-791a-4d15-83b6-fb5f4002f786-kube-api-access-5vlqm\") pod \"migrator-59844c95c7-km5pw\" (UID: \"914dc40a-791a-4d15-83b6-fb5f4002f786\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.056529 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb5ead16-7592-4bd3-9ebb-ee8499eb639b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bgbsx\" (UID: \"fb5ead16-7592-4bd3-9ebb-ee8499eb639b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.073001 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.075621 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.076165 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:17.576150901 +0000 UTC m=+159.059490186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.079837 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.083198 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6zbg\" (UniqueName: \"kubernetes.io/projected/1bc85282-8493-4e92-91eb-3a2072c87514-kube-api-access-d6zbg\") pod \"dns-default-rfv8b\" (UID: \"1bc85282-8493-4e92-91eb-3a2072c87514\") " pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.087092 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.096412 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk9ft\" (UniqueName: \"kubernetes.io/projected/e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4-kube-api-access-qk9ft\") pod \"authentication-operator-69f744f599-zw8x5\" (UID: \"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.103523 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.104191 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkppt\" (UniqueName: \"kubernetes.io/projected/06a279e1-00f2-4ae0-9bc4-6481c53c14f1-kube-api-access-tkppt\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvjnm\" (UID: \"06a279e1-00f2-4ae0-9bc4-6481c53c14f1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.111807 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.123854 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.140609 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sfkr\" (UniqueName: \"kubernetes.io/projected/68864ea7-f5a1-40f4-80c1-09bd344ef4f7-kube-api-access-6sfkr\") pod \"service-ca-operator-777779d784-sbtxv\" (UID: \"68864ea7-f5a1-40f4-80c1-09bd344ef4f7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.148923 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.157653 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.160389 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkh9n\" (UniqueName: \"kubernetes.io/projected/f6422cff-e2d5-4935-81b3-85fbb721a86b-kube-api-access-tkh9n\") pod \"csi-hostpathplugin-l5xjz\" (UID: \"f6422cff-e2d5-4935-81b3-85fbb721a86b\") " pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.176576 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.177022 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:17.677003566 +0000 UTC m=+159.160342861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.189961 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37871634-4204-40b5-850b-5789bb71caf6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jflvh\" (UID: \"37871634-4204-40b5-850b-5789bb71caf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.202665 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdzwn\" (UniqueName: \"kubernetes.io/projected/f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903-kube-api-access-rdzwn\") pod \"packageserver-d55dfcdfc-654gb\" (UID: \"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.250629 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.251285 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.274398 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.278103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.278477 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:17.778464129 +0000 UTC m=+159.261803404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.278958 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf96g\" (UniqueName: \"kubernetes.io/projected/050aa7a5-1385-4d83-baae-173bb748aed6-kube-api-access-rf96g\") pod \"catalog-operator-68c6474976-5z9sw\" (UID: \"050aa7a5-1385-4d83-baae-173bb748aed6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.310616 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.334405 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48ntx\" (UniqueName: \"kubernetes.io/projected/d20604ed-3385-44c3-8dfd-b212005182d2-kube-api-access-48ntx\") pod \"machine-config-operator-74547568cd-trk29\" (UID: \"d20604ed-3385-44c3-8dfd-b212005182d2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.336465 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s47cd\" (UniqueName: \"kubernetes.io/projected/c33d209a-fda4-44bd-944f-95cc380f4173-kube-api-access-s47cd\") pod \"machine-config-controller-84d6567774-k254w\" (UID: \"c33d209a-fda4-44bd-944f-95cc380f4173\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.359617 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.363800 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.399190 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.399694 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:17.89967561 +0000 UTC m=+159.383014905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.399832 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.422697 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7c9j\" (UniqueName: \"kubernetes.io/projected/ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4-kube-api-access-p7c9j\") pod \"machine-config-server-zj7cr\" (UID: \"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4\") " pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.432246 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.450758 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.508949 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.513138 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.013119431 +0000 UTC m=+159.496458716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.609580 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-p5cqb"] Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.619773 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.620133 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz"] Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.620326 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.120306076 +0000 UTC m=+159.603645361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.674990 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zj7cr" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.678008 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x45ps"] Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.687121 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5"] Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.721473 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.722026 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.222012026 +0000 UTC m=+159.705351311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.823175 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.823298 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.323278232 +0000 UTC m=+159.806617517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.823646 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.824032 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.324023454 +0000 UTC m=+159.807362739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.926275 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:17 crc kubenswrapper[4758]: E0122 16:32:17.926676 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.426657069 +0000 UTC m=+159.909996354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.947585 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9dx9w" podStartSLOduration=135.947562791 podStartE2EDuration="2m15.947562791s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:17.946207202 +0000 UTC m=+159.429546487" watchObservedRunningTime="2026-01-22 16:32:17.947562791 +0000 UTC m=+159.430902086" Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.957894 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct"] Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.961555 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7jtcn" event={"ID":"1d2c5bee-e237-4043-9f8a-73bb67ebf355","Type":"ContainerStarted","Data":"48b85ab09aba26e534542c4e728289bc60e1e212e40df8bbb49044a247b2ff57"} Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.969807 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2k2wj"] Jan 22 16:32:17 crc kubenswrapper[4758]: I0122 16:32:17.983028 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" event={"ID":"652cdabf-3f77-4cff-aae4-1f51ed209be0","Type":"ContainerStarted","Data":"277fcc92c8f18d91937c2e02ad838a8bb8e34e324aabfaaa53c31a2590e7e501"} Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.005451 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv"] Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.045711 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:18 crc kubenswrapper[4758]: E0122 16:32:18.046114 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.54609713 +0000 UTC m=+160.029436415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.059320 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p5cqb" event={"ID":"327d43d9-41eb-4ef4-9df0-d38e0739b7df","Type":"ContainerStarted","Data":"9e6118dd96b1069b3eaafdfd96477795411c3b95066cf7d3335e8a0125cb94cf"} Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.068057 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" event={"ID":"c1b56bc8-fee3-4990-88c8-12d557ea0639","Type":"ContainerStarted","Data":"b516c30ace5691b27cd3cfeeb196d1368a711b4ba59df99b917e276c792c04a7"} Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.088021 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.125137 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" podStartSLOduration=136.125117468 podStartE2EDuration="2m16.125117468s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:18.124258383 +0000 UTC m=+159.607597668" watchObservedRunningTime="2026-01-22 16:32:18.125117468 +0000 UTC m=+159.608456753" Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.148552 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:18 crc kubenswrapper[4758]: E0122 16:32:18.149582 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 16:32:18.649566049 +0000 UTC m=+160.132905334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.217036 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rc8wq"] Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.240860 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-54h94"] Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.243025 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lnj88"] Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.250126 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:18 crc kubenswrapper[4758]: E0122 16:32:18.251673 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.751659419 +0000 UTC m=+160.234998704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.266471 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n2kln"] Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.324261 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz"] Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.353436 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:18 crc kubenswrapper[4758]: E0122 16:32:18.354032 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.854013907 +0000 UTC m=+160.337353192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.409579 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zlnf7"] Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.467467 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:18 crc kubenswrapper[4758]: E0122 16:32:18.471232 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:18.971211315 +0000 UTC m=+160.454550610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.596231 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:18 crc kubenswrapper[4758]: E0122 16:32:18.596653 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.096634235 +0000 UTC m=+160.579973520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.698190 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:18 crc kubenswrapper[4758]: E0122 16:32:18.698523 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.198509319 +0000 UTC m=+160.681848604 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.700411 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" podStartSLOduration=136.700399492 podStartE2EDuration="2m16.700399492s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:18.699102547 +0000 UTC m=+160.182441822" watchObservedRunningTime="2026-01-22 16:32:18.700399492 +0000 UTC m=+160.183738777" Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.799664 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:18 crc kubenswrapper[4758]: E0122 16:32:18.800059 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.300039394 +0000 UTC m=+160.783378679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:18 crc kubenswrapper[4758]: I0122 16:32:18.903614 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:18 crc kubenswrapper[4758]: E0122 16:32:18.903956 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.403941145 +0000 UTC m=+160.887280430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.005413 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.005538 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.50551652 +0000 UTC m=+160.988855805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.006320 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.008266 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.508245227 +0000 UTC m=+160.991584562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.087243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n2kln" event={"ID":"8f67259d-8eec-4f78-929f-01e7abe893f6","Type":"ContainerStarted","Data":"8eae8d77a6d95ef19ce0215a07ffad917c59d31ab1e66c73689f56ba04b8b0b1"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.088443 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" event={"ID":"cec5698b-f4e0-4c73-abe0-f999df35f0c6","Type":"ContainerStarted","Data":"42833b78be4d022a890b796bef8c7338af78723082c329af1093cf4985d0968c"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.090475 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" event={"ID":"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a","Type":"ContainerStarted","Data":"b4f7f1ea3eb18312a7b65ff186cef5fa83ff56371de6a50322958e08da6fd999"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.093754 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-54h94" event={"ID":"22aa04a2-a400-462b-b73e-ba9b37664490","Type":"ContainerStarted","Data":"c6fa1b3630728b7b8d1424b31468bb0c0db500446b8ed0b8c56a62038fcb0e2f"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.104407 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" event={"ID":"0eb1a077-ff54-4f67-9cd5-e2c056ef807e","Type":"ContainerStarted","Data":"89153d178480393a54c23d68f316534e0e5871478eca072d66d2f499f4891978"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.105945 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" event={"ID":"7006bfc3-2fa7-483c-8bcf-7ded310328a9","Type":"ContainerStarted","Data":"a0f26120311684d41d6e9405e1ca03cb9a8a82458fbf7b590f93f0a0d157c86f"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.107581 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" event={"ID":"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7","Type":"ContainerStarted","Data":"da4ef37624a65b71176bfdf53dc9466eb2f4460c1def92b25081c04c9d998b30"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.109998 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.110426 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.61041154 +0000 UTC m=+161.093750825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.114694 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zj7cr" event={"ID":"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4","Type":"ContainerStarted","Data":"1e15d373d03e2a68c1804aad2e8b630d44b2e539b8bb3b6a43222e9b016d99f4"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.127772 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x45ps" event={"ID":"8add5c64-8462-48d4-8ac6-6ea831d7a535","Type":"ContainerStarted","Data":"724017c49000a574de8066598939d4b7fe73a2e35a9e246e82f90dc4dce99e57"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.132582 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" event={"ID":"7a8b9092-45e9-456e-b1bc-e997c96a9836","Type":"ContainerStarted","Data":"d85fb022577f202f2add5850c6f2c89379fd80501b95145f89b3ba7c6e94abf8"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.133997 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" event={"ID":"7d7a9e04-71e1-4090-96af-395ad7e823ac","Type":"ContainerStarted","Data":"14d91218c33c4d31cd87f596b0b3e9c8680372f9673bd406378fee8ac09cc6e1"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.135492 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" event={"ID":"fcc6018a-27ef-4a30-98f2-90d2e6e454be","Type":"ContainerStarted","Data":"f7f52b876ccaf74c001f54c980af04163f00f8d97b6d1d704b0fc5557b0c70c8"} Jan 22 16:32:19 
crc kubenswrapper[4758]: I0122 16:32:19.136316 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" event={"ID":"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2","Type":"ContainerStarted","Data":"2c52da8e7cef5230da17d45227660d6f05173669cadce5c058ae4a5c27233e56"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.139846 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7jtcn" event={"ID":"1d2c5bee-e237-4043-9f8a-73bb67ebf355","Type":"ContainerStarted","Data":"e6328b3f8eb0ebb276905cd219ed5e06c1862e4becf813175e47042263d50919"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.158777 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" event={"ID":"1358317e-b558-46f9-b9f7-0fcfcc0eb2c9","Type":"ContainerStarted","Data":"ee0dddcf95848e458433049d6853dcbcee69034a172a35b92d059b2d23c7d8a5"} Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.213114 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.236207 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.73618603 +0000 UTC m=+161.219525315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.363489 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.365032 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.864994847 +0000 UTC m=+161.348334132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.466759 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.467259 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:19.967240801 +0000 UTC m=+161.450580246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.496220 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" podStartSLOduration=137.496201732 podStartE2EDuration="2m17.496201732s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:19.493590527 +0000 UTC m=+160.976929812" watchObservedRunningTime="2026-01-22 16:32:19.496201732 +0000 UTC m=+160.979541017" Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.569550 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.570036 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.070013751 +0000 UTC m=+161.553353036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.671517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.671923 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.171904095 +0000 UTC m=+161.655243430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.737919 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7jtcn" podStartSLOduration=137.737902303 podStartE2EDuration="2m17.737902303s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:19.540103834 +0000 UTC m=+161.023443129" watchObservedRunningTime="2026-01-22 16:32:19.737902303 +0000 UTC m=+161.221241588" Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.739699 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s"] Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.773448 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.773647 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.273627375 +0000 UTC m=+161.756966670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.774094 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.774559 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.274542341 +0000 UTC m=+161.757881636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: W0122 16:32:19.777456 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fcd001e_7c62_4167_adbd_afd79a1dd594.slice/crio-dbc919f96f1c4b4df79856f2a22abe258d2cd214c26caa35bcf6fd4b04fac489 WatchSource:0}: Error finding container dbc919f96f1c4b4df79856f2a22abe258d2cd214c26caa35bcf6fd4b04fac489: Status 404 returned error can't find the container with id dbc919f96f1c4b4df79856f2a22abe258d2cd214c26caa35bcf6fd4b04fac489 Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.839133 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-65j2c"] Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.841662 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl"] Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.874496 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.875071 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.875592 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:19 crc kubenswrapper[4758]: E0122 16:32:19.875900 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.37588579 +0000 UTC m=+161.859225075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:19 crc kubenswrapper[4758]: I0122 16:32:19.925248 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:19.978580 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:19.978984 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.478969958 +0000 UTC m=+161.962309253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.011711 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8bd5414_72ea_40f8_8cf2_a6bf81e1258a.slice/crio-conmon-381f6d2d9f93ed970a25e0b17b20f1075519bcb784323f2ffc8602ab7c67754b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8bd5414_72ea_40f8_8cf2_a6bf81e1258a.slice/crio-381f6d2d9f93ed970a25e0b17b20f1075519bcb784323f2ffc8602ab7c67754b.scope\": RecentStats: unable to find data in memory cache]" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.049606 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.071977 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:20 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:20 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:20 crc kubenswrapper[4758]: healthz 
check failed Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.072042 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.084967 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.085279 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.585251157 +0000 UTC m=+162.068590442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.144153 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.188634 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.189106 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.689094707 +0000 UTC m=+162.172433992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.208705 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l5xjz"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.224567 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.235492 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" event={"ID":"1772eca5-cae4-40ba-94c7-d00f0c70636f","Type":"ContainerStarted","Data":"bf5fd6d70974a79c4e34abf886171ed28d8befd0d9a857c9b94a1c4813086ce0"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.245229 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.251239 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k254w"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.255551 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zw8x5"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.262188 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.286521 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.289284 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.289664 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.789647114 +0000 UTC m=+162.272986399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.291827 4758 generic.go:334] "Generic (PLEG): container finished" podID="c8bd5414-72ea-40f8-8cf2-a6bf81e1258a" containerID="381f6d2d9f93ed970a25e0b17b20f1075519bcb784323f2ffc8602ab7c67754b" exitCode=0 Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.291918 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" event={"ID":"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a","Type":"ContainerDied","Data":"381f6d2d9f93ed970a25e0b17b20f1075519bcb784323f2ffc8602ab7c67754b"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.302325 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zl4zm"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.325193 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" event={"ID":"c1b56bc8-fee3-4990-88c8-12d557ea0639","Type":"ContainerStarted","Data":"0c4a5d40d46167bd6e34904eb4316e7a0c0ad39b771da72d8d2cf31ea0f5ff82"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.328537 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.337935 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" event={"ID":"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7","Type":"ContainerStarted","Data":"a450224c34a291e7baa82c3213710dd47f486218bba7c8d26acd9f8d11c6745f"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.337986 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" event={"ID":"e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7","Type":"ContainerStarted","Data":"d4bb8426f8260229a0577ff62187514f9f366ebc93e06e7215dfff944bcda6de"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.342811 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.350980 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" event={"ID":"5760aa2c-cda7-44bb-8458-d31b09eb2de5","Type":"ContainerStarted","Data":"43c9df9198c40edf4f87eb73dce830c960c551219e5a9e3ec794573d16c5696b"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.358455 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.365158 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" 
event={"ID":"7006bfc3-2fa7-483c-8bcf-7ded310328a9","Type":"ContainerStarted","Data":"79e5c330b824df47386fbfd010dc1693526b2cdb2ecf98bd81bb8b06a9fbd297"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.388680 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxbnz" podStartSLOduration=138.388656186 podStartE2EDuration="2m18.388656186s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.364360638 +0000 UTC m=+161.847699933" watchObservedRunningTime="2026-01-22 16:32:20.388656186 +0000 UTC m=+161.871995471" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.390448 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.409265 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.909232219 +0000 UTC m=+162.392571504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.410383 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjsgm"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.410410 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-trk29"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.410425 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" event={"ID":"cec5698b-f4e0-4c73-abe0-f999df35f0c6","Type":"ContainerStarted","Data":"ac7c55b44df7dfc84a1aee9d072b00ab1099d6746d5676554bf47046ad89de10"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.411712 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-2k2wj" podStartSLOduration=138.411696119 podStartE2EDuration="2m18.411696119s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.409036973 +0000 UTC m=+161.892376258" watchObservedRunningTime="2026-01-22 16:32:20.411696119 +0000 UTC m=+161.895035404" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.413395 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.433657 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.433699 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rfv8b"] Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.445511 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" event={"ID":"652cdabf-3f77-4cff-aae4-1f51ed209be0","Type":"ContainerStarted","Data":"de87bda3d8e570bbc54f670652b355916926e10bbd587a6e00934361a890b4a9"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.451692 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6fjnz" podStartSLOduration=138.45167296 podStartE2EDuration="2m18.45167296s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.450833197 +0000 UTC m=+161.934172482" watchObservedRunningTime="2026-01-22 16:32:20.45167296 +0000 UTC m=+161.935012245" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.492156 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.493925 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:20.993889565 +0000 UTC m=+162.477228850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: W0122 16:32:20.508167 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68864ea7_f5a1_40f4_80c1_09bd344ef4f7.slice/crio-ede649279aca4462c2befbbd6ba8fca34434d90050cabcdcbd6d70ecf694e8ce WatchSource:0}: Error finding container ede649279aca4462c2befbbd6ba8fca34434d90050cabcdcbd6d70ecf694e8ce: Status 404 returned error can't find the container with id ede649279aca4462c2befbbd6ba8fca34434d90050cabcdcbd6d70ecf694e8ce Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.517834 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2q4t5" podStartSLOduration=138.517812942 podStartE2EDuration="2m18.517812942s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.515317982 +0000 UTC m=+161.998657267" watchObservedRunningTime="2026-01-22 16:32:20.517812942 +0000 UTC m=+162.001152227" Jan 22 16:32:20 crc kubenswrapper[4758]: W0122 16:32:20.527638 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb717b97_58b5_402c_983f_9bf1e88c40a4.slice/crio-92ab5c9b6eda02f1d9b128570453d8d42b9ec7cd3dc27e70d4e70b5e07a13462 WatchSource:0}: Error finding container 92ab5c9b6eda02f1d9b128570453d8d42b9ec7cd3dc27e70d4e70b5e07a13462: Status 404 returned error can't find the container with id 92ab5c9b6eda02f1d9b128570453d8d42b9ec7cd3dc27e70d4e70b5e07a13462 Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.530855 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" event={"ID":"3fcd001e-7c62-4167-adbd-afd79a1dd594","Type":"ContainerStarted","Data":"dbc919f96f1c4b4df79856f2a22abe258d2cd214c26caa35bcf6fd4b04fac489"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.585097 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" event={"ID":"7a8b9092-45e9-456e-b1bc-e997c96a9836","Type":"ContainerStarted","Data":"2c873c426032273424da547a9d103dd73ee2b258108e2235ba52fbaafeba9468"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.585145 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" event={"ID":"7a8b9092-45e9-456e-b1bc-e997c96a9836","Type":"ContainerStarted","Data":"1c25efbc5c6393e773df55eab54fccdfe1c541b22ba31dd35dee42ed499b5278"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.600176 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: 
\"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.603146 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:21.103128837 +0000 UTC m=+162.586468122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.609154 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" podStartSLOduration=138.609134848 podStartE2EDuration="2m18.609134848s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.586475677 +0000 UTC m=+162.069814952" watchObservedRunningTime="2026-01-22 16:32:20.609134848 +0000 UTC m=+162.092474133" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.614690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" event={"ID":"7d7a9e04-71e1-4090-96af-395ad7e823ac","Type":"ContainerStarted","Data":"011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.615672 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.622729 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n2kln" event={"ID":"8f67259d-8eec-4f78-929f-01e7abe893f6","Type":"ContainerStarted","Data":"60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.632996 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" event={"ID":"7baaa22f-75fb-4524-91fa-89eb385e0ad5","Type":"ContainerStarted","Data":"2b374645bd6e590f2aabe4890b98418f81959041b886432e45af2c728a87f19d"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.657721 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p5cqb" event={"ID":"327d43d9-41eb-4ef4-9df0-d38e0739b7df","Type":"ContainerStarted","Data":"ab1303618c1cf291efd95c335a5fe5c3db7734817b22ab68e757fa31d693d809"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.658706 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-p5cqb" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.664290 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" 
event={"ID":"fcc6018a-27ef-4a30-98f2-90d2e6e454be","Type":"ContainerStarted","Data":"91d6595edcec45ce188b89f7ea4387d81b3d73b5bc188513be6d59fc5e289b07"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.685408 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.685467 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.702023 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x45ps" event={"ID":"8add5c64-8462-48d4-8ac6-6ea831d7a535","Type":"ContainerStarted","Data":"4459707524e658c90fbaee5643ccfcfdd339c1d30312b5d6bc03fe57b3f65ff2"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.702914 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.703273 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.704401 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:21.204385474 +0000 UTC m=+162.687724759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.726922 4758 patch_prober.go:28] interesting pod/console-operator-58897d9998-x45ps container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.726980 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-x45ps" podUID="8add5c64-8462-48d4-8ac6-6ea831d7a535" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.753012 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" event={"ID":"7c7ef802-c8dd-48a5-a7c5-5cf646b633f2","Type":"ContainerStarted","Data":"c73896ae871569b0f5314dcadd23e0b976e7b9aebc7d6224e1fe1d875860d294"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.754871 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" podStartSLOduration=138.754854713 podStartE2EDuration="2m18.754854713s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.657432815 +0000 UTC m=+162.140772100" watchObservedRunningTime="2026-01-22 16:32:20.754854713 +0000 UTC m=+162.238193998" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.783917 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" event={"ID":"0eb1a077-ff54-4f67-9cd5-e2c056ef807e","Type":"ContainerStarted","Data":"fd7e79c4121f395a37de4c503696fa390e2d88107b3608ba7a636204038a2ac5"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.793607 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" podStartSLOduration=138.793592329 podStartE2EDuration="2m18.793592329s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.765585246 +0000 UTC m=+162.248924531" watchObservedRunningTime="2026-01-22 16:32:20.793592329 +0000 UTC m=+162.276931614" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.795252 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-n2kln" podStartSLOduration=138.795243376 podStartE2EDuration="2m18.795243376s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.792797767 +0000 UTC 
m=+162.276137052" watchObservedRunningTime="2026-01-22 16:32:20.795243376 +0000 UTC m=+162.278582661" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.798502 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-54h94" event={"ID":"22aa04a2-a400-462b-b73e-ba9b37664490","Type":"ContainerStarted","Data":"85748feb0d892a7e2b67b6e686f28c397694aad57e7523ef1c848690b76660fd"} Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.805037 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.806367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zj7cr" event={"ID":"ca51b6f8-d0ec-4d8e-bec4-e55b4c591dd4","Type":"ContainerStarted","Data":"0e016cd99c76b8b3a7ecbae13b3e76b7be2567f2e7380f73dfeb3c200c52cd44"} Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.806999 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:21.306988049 +0000 UTC m=+162.790327334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.818997 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rc8wq" podStartSLOduration=138.818971018 podStartE2EDuration="2m18.818971018s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.811843456 +0000 UTC m=+162.295182741" watchObservedRunningTime="2026-01-22 16:32:20.818971018 +0000 UTC m=+162.302310303" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.825168 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hvjhq" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.840838 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" podStartSLOduration=138.840821566 podStartE2EDuration="2m18.840821566s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.83742351 +0000 UTC m=+162.320762795" watchObservedRunningTime="2026-01-22 16:32:20.840821566 +0000 UTC m=+162.324160851" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.861391 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/downloads-7954f5f757-p5cqb" podStartSLOduration=138.861367898 podStartE2EDuration="2m18.861367898s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.856579892 +0000 UTC m=+162.339919177" watchObservedRunningTime="2026-01-22 16:32:20.861367898 +0000 UTC m=+162.344707183" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.881522 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" podStartSLOduration=138.881493038 podStartE2EDuration="2m18.881493038s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.879320207 +0000 UTC m=+162.362659492" watchObservedRunningTime="2026-01-22 16:32:20.881493038 +0000 UTC m=+162.364846453" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.906493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:20 crc kubenswrapper[4758]: E0122 16:32:20.907847 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:21.407827953 +0000 UTC m=+162.891167238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.934713 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-54h94" podStartSLOduration=138.934694284 podStartE2EDuration="2m18.934694284s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.933956763 +0000 UTC m=+162.417296058" watchObservedRunningTime="2026-01-22 16:32:20.934694284 +0000 UTC m=+162.418033569" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.936523 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-x45ps" podStartSLOduration=138.936507796 podStartE2EDuration="2m18.936507796s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.90911219 +0000 UTC m=+162.392451475" watchObservedRunningTime="2026-01-22 16:32:20.936507796 +0000 UTC m=+162.419847091" Jan 22 16:32:20 crc kubenswrapper[4758]: I0122 16:32:20.991639 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zj7cr" podStartSLOduration=6.991620886 podStartE2EDuration="6.991620886s" podCreationTimestamp="2026-01-22 16:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:20.968072029 +0000 UTC m=+162.451411324" watchObservedRunningTime="2026-01-22 16:32:20.991620886 +0000 UTC m=+162.474960171" Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.008701 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.009052 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:21.509036498 +0000 UTC m=+162.992375783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.053793 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:21 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:21 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:21 crc kubenswrapper[4758]: healthz check failed Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.053840 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.109460 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.109796 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:21.60977944 +0000 UTC m=+163.093118725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.210639 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.211030 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:21.711016287 +0000 UTC m=+163.194355572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.311822 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.312156 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:21.812141679 +0000 UTC m=+163.295480964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.413182 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.413524 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:21.913507928 +0000 UTC m=+163.396847213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.513762 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.513997 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.013954562 +0000 UTC m=+163.497293847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.514087 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.514496 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.014476107 +0000 UTC m=+163.497815392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.615564 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.615955 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.115935909 +0000 UTC m=+163.599275194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.616276 4758 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qc9q5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.616348 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" podUID="7d7a9e04-71e1-4090-96af-395ad7e823ac" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.718647 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.719092 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.219055118 +0000 UTC m=+163.702394403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.819510 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.819937 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.319919443 +0000 UTC m=+163.803258728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.842375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" event={"ID":"5caed3c6-9037-4ecf-b0db-778db52bd3ee","Type":"ContainerStarted","Data":"e5e3ebdad4eeca671ca7800977916d8b4cd3ad73ac41d7c91106d2a709718986"} Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.843569 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw" event={"ID":"914dc40a-791a-4d15-83b6-fb5f4002f786","Type":"ContainerStarted","Data":"248ff2e58b62b702662c2e4cd20faa9ad36caf2fae33fe542d0d244587ccd9f7"} Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.882347 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" event={"ID":"3fcd001e-7c62-4167-adbd-afd79a1dd594","Type":"ContainerStarted","Data":"3962dc81e89f1ddcd8151ae93dae7ec3025c449a641e0babe696c7314096acb6"} Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.900877 4758 csr.go:261] certificate signing request csr-jhj77 is approved, waiting to be issued Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.912922 4758 csr.go:257] certificate signing request csr-jhj77 is issued Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.948082 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:21 crc kubenswrapper[4758]: E0122 16:32:21.948822 4758 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.448806943 +0000 UTC m=+163.932146228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.958377 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" event={"ID":"5760aa2c-cda7-44bb-8458-d31b09eb2de5","Type":"ContainerStarted","Data":"c0de1600da1e7f421dd14f4d3ae8432a0a444d0e866e5fd1bdb320fb1366a77c"} Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.972994 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" event={"ID":"1772eca5-cae4-40ba-94c7-d00f0c70636f","Type":"ContainerStarted","Data":"0c08216f82fc3feb04ec6d38f1927b0b5295781736b46607ad8b120c081d6527"} Jan 22 16:32:21 crc kubenswrapper[4758]: I0122 16:32:21.996057 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" event={"ID":"06a279e1-00f2-4ae0-9bc4-6481c53c14f1","Type":"ContainerStarted","Data":"c0e27b03fffe6dad3b06b33d518dccde5522be9c12db78f68b97977f45e54042"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.036658 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" event={"ID":"37871634-4204-40b5-850b-5789bb71caf6","Type":"ContainerStarted","Data":"4df40330ba45aede92e980c4b8890cfd521f06698604366fb3016613863efdca"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.039222 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" event={"ID":"68864ea7-f5a1-40f4-80c1-09bd344ef4f7","Type":"ContainerStarted","Data":"ede649279aca4462c2befbbd6ba8fca34434d90050cabcdcbd6d70ecf694e8ce"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.042486 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zl4zm" event={"ID":"0cb4bda1-2b7b-4c94-8735-dde72faef39e","Type":"ContainerStarted","Data":"d633571843276db2307637539bc56f2f56f054b540e94c5c5df6809c9adbd6fe"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.050216 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.051476 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 16:32:22.551459459 +0000 UTC m=+164.034798744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.058406 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:22 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:22 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:22 crc kubenswrapper[4758]: healthz check failed Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.058472 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.066813 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rfv8b" event={"ID":"1bc85282-8493-4e92-91eb-3a2072c87514","Type":"ContainerStarted","Data":"0ff101892330ae67b12e9888750133ab782f0e0970a5b6a717dfd2649ff97e6e"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.074276 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" event={"ID":"db717b97-58b5-402c-983f-9bf1e88c40a4","Type":"ContainerStarted","Data":"92ab5c9b6eda02f1d9b128570453d8d42b9ec7cd3dc27e70d4e70b5e07a13462"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.077998 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zlnf7" event={"ID":"fcc6018a-27ef-4a30-98f2-90d2e6e454be","Type":"ContainerStarted","Data":"43bb227e755b115d01d9c7d7008678e88de63e4dd23eb9d5aadaad18f6799043"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.079839 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" event={"ID":"c33d209a-fda4-44bd-944f-95cc380f4173","Type":"ContainerStarted","Data":"4f023339933a34f384324c94f53ad9bde628f94d6cd2c1ec98462801e5d8e119"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.079859 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" event={"ID":"c33d209a-fda4-44bd-944f-95cc380f4173","Type":"ContainerStarted","Data":"7d649fee454abf3f4bceba6b3923d16deee58532370bc513f670aac73fe8858e"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.093142 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" event={"ID":"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4","Type":"ContainerStarted","Data":"7f94dd9f1b28a9cd554298654a291c37e508d7f8b6cb53981aa12a140f5abea0"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.093191 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" event={"ID":"e7e7f0b8-5889-4e1b-8bc4-4b7b533d9fb4","Type":"ContainerStarted","Data":"aa191909a0c6d00c024717aa07b4393968fa1a85d141df5857251e518f748f56"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.097448 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" event={"ID":"7baaa22f-75fb-4524-91fa-89eb385e0ad5","Type":"ContainerStarted","Data":"48845c1c7cd82891428c7874f9bdd2a92f63131892516f55dad4418a7cd37b9e"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.101138 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" event={"ID":"050aa7a5-1385-4d83-baae-173bb748aed6","Type":"ContainerStarted","Data":"5dc7b660ecd2468011e2203116663607ad8e83cccad56c700eccb9f30458d9d3"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.103579 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" event={"ID":"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903","Type":"ContainerStarted","Data":"35d8b95b7cc59a17eca4beb6741b1d05bdd7fb4d0058f7c40979bc825b98b16e"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.124651 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" event={"ID":"c8bd5414-72ea-40f8-8cf2-a6bf81e1258a","Type":"ContainerStarted","Data":"bd8e34956e4186a72cd7b17a01ce56d7cd908a170eba7b5ccdb3061f01377530"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.125374 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.126917 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-svrdl" podStartSLOduration=140.126904065 podStartE2EDuration="2m20.126904065s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:21.993973361 +0000 UTC m=+163.477312646" watchObservedRunningTime="2026-01-22 16:32:22.126904065 +0000 UTC m=+163.610243350" Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.128889 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-zw8x5" podStartSLOduration=140.12887825 podStartE2EDuration="2m20.12887825s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:22.12567147 +0000 UTC m=+163.609010765" watchObservedRunningTime="2026-01-22 16:32:22.12887825 +0000 UTC m=+163.612217535" Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.137758 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" event={"ID":"fb5ead16-7592-4bd3-9ebb-ee8499eb639b","Type":"ContainerStarted","Data":"3084c0ee41d35f28aa55121ba895a9bc2448910a678aa4114533f37db8e13991"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.141438 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" event={"ID":"5000e0b7-97a0-4868-a61c-281d1e2ab6ea","Type":"ContainerStarted","Data":"d7123a436b082715a0145319d89413362bf80c7e71c5002b0b5ab42715ce4b39"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.141477 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" event={"ID":"5000e0b7-97a0-4868-a61c-281d1e2ab6ea","Type":"ContainerStarted","Data":"d07a0dce91e57633ff177a1d3a6dc7838ad9f8d5c0ec03851234dd3ad28c7c64"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.146173 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" event={"ID":"f6422cff-e2d5-4935-81b3-85fbb721a86b","Type":"ContainerStarted","Data":"c181d091e5b93c00c90a302e4c59f485a1db657fe3da170d0bd0a21a63b61483"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.154673 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-crm27" podStartSLOduration=140.1546576 podStartE2EDuration="2m20.1546576s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:22.151057438 +0000 UTC m=+163.634396723" watchObservedRunningTime="2026-01-22 16:32:22.1546576 +0000 UTC m=+163.637996875" Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.156135 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.159197 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.659181078 +0000 UTC m=+164.142520443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.167841 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" event={"ID":"d20604ed-3385-44c3-8dfd-b212005182d2","Type":"ContainerStarted","Data":"64cec4f4113682d524e1356d658819d915d2caf7ff16c9bff8714d6876de88a5"} Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.170153 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.170200 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.173489 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.205534 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" podStartSLOduration=140.20551471 podStartE2EDuration="2m20.20551471s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:22.173450322 +0000 UTC m=+163.656789617" watchObservedRunningTime="2026-01-22 16:32:22.20551471 +0000 UTC m=+163.688853995" Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.257545 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.258994 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.758978873 +0000 UTC m=+164.242318158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.366493 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.367097 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.867079223 +0000 UTC m=+164.350418568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.446389 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-x45ps" Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.468665 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.469026 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:22.969006579 +0000 UTC m=+164.452345864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.571439 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.571999 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.071988095 +0000 UTC m=+164.555327380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.673091 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.673232 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.173203269 +0000 UTC m=+164.656542574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.673312 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.673669 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.173658112 +0000 UTC m=+164.656997397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.774486 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.774707 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.274679822 +0000 UTC m=+164.758019107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.775029 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.775387 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.275372032 +0000 UTC m=+164.758711317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.875731 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.876117 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.376100053 +0000 UTC m=+164.859439338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.944569 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 16:27:21 +0000 UTC, rotation deadline is 2026-11-06 19:36:57.422908544 +0000 UTC Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.944623 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6915h4m34.47828895s for next certificate rotation Jan 22 16:32:22 crc kubenswrapper[4758]: I0122 16:32:22.977138 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:22 crc kubenswrapper[4758]: E0122 16:32:22.977589 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.477567595 +0000 UTC m=+164.960906880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.054177 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:23 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:23 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:23 crc kubenswrapper[4758]: healthz check failed Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.054254 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.078038 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.078231 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.578206205 +0000 UTC m=+165.061545490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.078273 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.078576 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.578569505 +0000 UTC m=+165.061908790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.172353 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rfv8b" event={"ID":"1bc85282-8493-4e92-91eb-3a2072c87514","Type":"ContainerStarted","Data":"d55d9fcbda806ae35407d5b7984d9abfdf6f75bff78d99dbda2cce1925580cee"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.173699 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" event={"ID":"d20604ed-3385-44c3-8dfd-b212005182d2","Type":"ContainerStarted","Data":"ec51843cfe6699de5f0570b4e0e251aeebf9b65a7ad7677da406d7368aaf8410"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.174996 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" event={"ID":"37871634-4204-40b5-850b-5789bb71caf6","Type":"ContainerStarted","Data":"e3190223130af18b76b1aef53fd6bbe322db71ce898f246167d934ab721ef2a3"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.176038 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" event={"ID":"68864ea7-f5a1-40f4-80c1-09bd344ef4f7","Type":"ContainerStarted","Data":"88b4ce4e5ed0e83d56b9714243a4ce5043030da1bdedb4189684ad0ad3941102"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.177423 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" event={"ID":"fb5ead16-7592-4bd3-9ebb-ee8499eb639b","Type":"ContainerStarted","Data":"126832504fb6871d7d99cf932999468041e1b3df5eaa96a396001d9148153ec3"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.178604 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.178716 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.678693099 +0000 UTC m=+165.162032384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.178890 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.179198 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.679188433 +0000 UTC m=+165.162527718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.180965 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw" event={"ID":"914dc40a-791a-4d15-83b6-fb5f4002f786","Type":"ContainerStarted","Data":"a75fce1162a45b5e1966b3cbd9e7be7d6f9fd28370a1cc65f171a2f38bddb733"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.182286 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" event={"ID":"3fcd001e-7c62-4167-adbd-afd79a1dd594","Type":"ContainerStarted","Data":"f07cccf02d7ae70cb1cf1748c8dd44e41c2fc23e35d1af6447e94198ec82652c"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.183990 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zl4zm" event={"ID":"0cb4bda1-2b7b-4c94-8735-dde72faef39e","Type":"ContainerStarted","Data":"174dcad1c1dc6978660c431fed2131a001acee492bd8580f162cda9a6849b92b"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.185265 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" event={"ID":"050aa7a5-1385-4d83-baae-173bb748aed6","Type":"ContainerStarted","Data":"3232c210f57989ffc8bdfd449e6708f8d895805f2056f7ecacd29a32f679f073"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.185977 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.186755 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" event={"ID":"f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903","Type":"ContainerStarted","Data":"b1993848ff0f09187872ba4650b4e9bd3e463e4b5cbc7e3e3725f226f10b3b97"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.187377 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.187459 4758 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5z9sw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.187497 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" podUID="050aa7a5-1385-4d83-baae-173bb748aed6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.188271 4758 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-654gb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.188280 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" event={"ID":"06a279e1-00f2-4ae0-9bc4-6481c53c14f1","Type":"ContainerStarted","Data":"dbca5d0db607c71f0be53cadf8c5247f0dbc2b2e4070c53a67d0d91739dac9ae"} Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.188341 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" podUID="f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.188886 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.188940 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.204979 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sbtxv" podStartSLOduration=141.204961243 podStartE2EDuration="2m21.204961243s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:23.199981392 +0000 UTC m=+164.683320677" watchObservedRunningTime="2026-01-22 16:32:23.204961243 +0000 UTC m=+164.688300528" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.223864 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqd5s" podStartSLOduration=141.223845547 podStartE2EDuration="2m21.223845547s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:23.222138839 +0000 UTC m=+164.705478124" watchObservedRunningTime="2026-01-22 16:32:23.223845547 +0000 UTC m=+164.707184832" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.240605 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" podStartSLOduration=141.240589651 podStartE2EDuration="2m21.240589651s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:23.239166141 +0000 UTC m=+164.722505426" watchObservedRunningTime="2026-01-22 16:32:23.240589651 +0000 UTC m=+164.723928926" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.280255 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.280394 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.780367778 +0000 UTC m=+165.263707053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.280678 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.281105 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.781090278 +0000 UTC m=+165.264429563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.284437 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" podStartSLOduration=141.284425122 podStartE2EDuration="2m21.284425122s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:23.284309729 +0000 UTC m=+164.767649014" watchObservedRunningTime="2026-01-22 16:32:23.284425122 +0000 UTC m=+164.767764407" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.311933 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bgbsx" podStartSLOduration=141.31190862 podStartE2EDuration="2m21.31190862s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:23.310150121 +0000 UTC m=+164.793489406" watchObservedRunningTime="2026-01-22 16:32:23.31190862 +0000 UTC m=+164.795247905" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.381484 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.382733 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.882717035 +0000 UTC m=+165.366056320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.392206 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvjnm" podStartSLOduration=141.392193353 podStartE2EDuration="2m21.392193353s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:23.391697509 +0000 UTC m=+164.875036794" watchObservedRunningTime="2026-01-22 16:32:23.392193353 +0000 UTC m=+164.875532638" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.407126 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-zl4zm" podStartSLOduration=9.407107246 podStartE2EDuration="9.407107246s" podCreationTimestamp="2026-01-22 16:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:23.406454598 +0000 UTC m=+164.889793883" watchObservedRunningTime="2026-01-22 16:32:23.407107246 +0000 UTC m=+164.890446531" Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.482955 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.483570 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:23.98355875 +0000 UTC m=+165.466898035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.590665 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.590871 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.090843207 +0000 UTC m=+165.574182492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.591059 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.591439 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.091425674 +0000 UTC m=+165.574764959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.692615 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.692797 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.192763122 +0000 UTC m=+165.676102407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.693060 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.693379 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.193368099 +0000 UTC m=+165.676707454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.794402 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.794860 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.294840222 +0000 UTC m=+165.778179507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.896349 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.896786 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.396771558 +0000 UTC m=+165.880110843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:23 crc kubenswrapper[4758]: I0122 16:32:23.998059 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:23 crc kubenswrapper[4758]: E0122 16:32:23.998473 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.498450386 +0000 UTC m=+165.981789681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.062550 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:24 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:24 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:24 crc kubenswrapper[4758]: healthz check failed Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.062618 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.099624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:24 crc kubenswrapper[4758]: E0122 16:32:24.099998 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.599983331 +0000 UTC m=+166.083322626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.194357 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" event={"ID":"d20604ed-3385-44c3-8dfd-b212005182d2","Type":"ContainerStarted","Data":"6e7ffdca7428764cf359b4c2fc438f824b21d4a83d68eb096b110c65a145a849"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.196145 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" event={"ID":"db717b97-58b5-402c-983f-9bf1e88c40a4","Type":"ContainerStarted","Data":"6a787be50be3bbfdbdf55f757cbaae376b940a7b77aa3e9a2c0f8429f0feab90"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.196906 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.197798 4758 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wszfq container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.197836 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" podUID="db717b97-58b5-402c-983f-9bf1e88c40a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.198132 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" event={"ID":"5caed3c6-9037-4ecf-b0db-778db52bd3ee","Type":"ContainerStarted","Data":"ad7762057c01299f540360f0792d6ba76ce7864075c83239ba128aa10145c676"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.198830 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.199453 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fjsgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.199492 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.200142 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw" event={"ID":"914dc40a-791a-4d15-83b6-fb5f4002f786","Type":"ContainerStarted","Data":"26cdb17c17a0d9dcbe000910400c08034df391483f0478c985aa0c5a856ada98"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.201705 4758 generic.go:334] "Generic (PLEG): container finished" podID="cec5698b-f4e0-4c73-abe0-f999df35f0c6" containerID="ac7c55b44df7dfc84a1aee9d072b00ab1099d6746d5676554bf47046ad89de10" exitCode=0 Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.202058 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" event={"ID":"cec5698b-f4e0-4c73-abe0-f999df35f0c6","Type":"ContainerDied","Data":"ac7c55b44df7dfc84a1aee9d072b00ab1099d6746d5676554bf47046ad89de10"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.205100 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:24 crc kubenswrapper[4758]: E0122 16:32:24.205458 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.705425215 +0000 UTC m=+166.188764500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.205940 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rfv8b" event={"ID":"1bc85282-8493-4e92-91eb-3a2072c87514","Type":"ContainerStarted","Data":"44aeaa416f14f37840c0309f0bc111f12000631121d3f5dffa0557a8bb10e460"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.206031 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.208615 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" event={"ID":"1772eca5-cae4-40ba-94c7-d00f0c70636f","Type":"ContainerStarted","Data":"884f623090a5fb7f3fd75b3d0513c80bd0dd079dbc1b14da9ccd22635c046bf0"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.210511 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" event={"ID":"5000e0b7-97a0-4868-a61c-281d1e2ab6ea","Type":"ContainerStarted","Data":"7f1edbd6dafc9deb4242078c8f020a1336cedf88adcc422be3876f3ad8339694"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.211069 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.213799 4758 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" event={"ID":"c33d209a-fda4-44bd-944f-95cc380f4173","Type":"ContainerStarted","Data":"98640906343a0202cb8333e0413421c335ee8195d3220c13fd645cc576aea118"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.215822 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" event={"ID":"f6422cff-e2d5-4935-81b3-85fbb721a86b","Type":"ContainerStarted","Data":"0a18be3350b7bc8c9958079fedec9f5cd763f55c5d24c4a785c06b1937aac9af"} Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.216243 4758 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-654gb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.216284 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" podUID="f3ef3c9d-b2ad-4d97-8ee3-1064b88bc903" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.248609 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z9sw" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.296321 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trk29" podStartSLOduration=142.296294058 podStartE2EDuration="2m22.296294058s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:24.251333655 +0000 UTC m=+165.734672940" watchObservedRunningTime="2026-01-22 16:32:24.296294058 +0000 UTC m=+165.779633343" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.307145 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:24 crc kubenswrapper[4758]: E0122 16:32:24.315373 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.815359428 +0000 UTC m=+166.298698713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.340418 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-65j2c" podStartSLOduration=142.340401227 podStartE2EDuration="2m22.340401227s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:24.295501106 +0000 UTC m=+165.778840391" watchObservedRunningTime="2026-01-22 16:32:24.340401227 +0000 UTC m=+165.823740512" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.379394 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jflvh" podStartSLOduration=142.37937762 podStartE2EDuration="2m22.37937762s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:24.378498335 +0000 UTC m=+165.861837630" watchObservedRunningTime="2026-01-22 16:32:24.37937762 +0000 UTC m=+165.862716905" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.411042 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:24 crc kubenswrapper[4758]: E0122 16:32:24.411308 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:24.911292783 +0000 UTC m=+166.394632068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.447434 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k254w" podStartSLOduration=142.447411736 podStartE2EDuration="2m22.447411736s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:24.401885418 +0000 UTC m=+165.885224703" watchObservedRunningTime="2026-01-22 16:32:24.447411736 +0000 UTC m=+165.930751021" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.494455 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-km5pw" podStartSLOduration=142.494418237 podStartE2EDuration="2m22.494418237s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:24.476972933 +0000 UTC m=+165.960312218" watchObservedRunningTime="2026-01-22 16:32:24.494418237 +0000 UTC m=+165.977757522" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.512655 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:24 crc kubenswrapper[4758]: E0122 16:32:24.512986 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.012971582 +0000 UTC m=+166.496310867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.524764 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" podStartSLOduration=142.524728344 podStartE2EDuration="2m22.524728344s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:24.522176132 +0000 UTC m=+166.005515407" watchObservedRunningTime="2026-01-22 16:32:24.524728344 +0000 UTC m=+166.008067629" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.556222 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" podStartSLOduration=142.556200896 podStartE2EDuration="2m22.556200896s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:24.554328762 +0000 UTC m=+166.037668047" watchObservedRunningTime="2026-01-22 16:32:24.556200896 +0000 UTC m=+166.039540181" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.594461 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rfv8b" podStartSLOduration=10.594432218 podStartE2EDuration="10.594432218s" podCreationTimestamp="2026-01-22 16:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:24.589299742 +0000 UTC m=+166.072639027" watchObservedRunningTime="2026-01-22 16:32:24.594432218 +0000 UTC m=+166.077771503" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.613223 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:24 crc kubenswrapper[4758]: E0122 16:32:24.613522 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.113505298 +0000 UTC m=+166.596844583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.630454 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podStartSLOduration=142.630431667 podStartE2EDuration="2m22.630431667s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:24.620704102 +0000 UTC m=+166.104043387" watchObservedRunningTime="2026-01-22 16:32:24.630431667 +0000 UTC m=+166.113770962" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.714199 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:24 crc kubenswrapper[4758]: E0122 16:32:24.714498 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.214484657 +0000 UTC m=+166.697823942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.816316 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.816595 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:32:24 crc kubenswrapper[4758]: E0122 16:32:24.816708 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.31668796 +0000 UTC m=+166.800027245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.835836 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ef1c490-d5f9-458d-8b3e-8580a5f07df6-metrics-certs\") pod \"network-metrics-daemon-2xqns\" (UID: \"3ef1c490-d5f9-458d-8b3e-8580a5f07df6\") " pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:32:24 crc kubenswrapper[4758]: I0122 16:32:24.918285 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:24 crc kubenswrapper[4758]: E0122 16:32:24.918691 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.418672247 +0000 UTC m=+166.902011532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.019146 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.019497 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xqns" Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.019529 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.519470321 +0000 UTC m=+167.002809606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.019632 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.020201 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.520190541 +0000 UTC m=+167.003529826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.052392 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:25 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:25 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:25 crc kubenswrapper[4758]: healthz check failed Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.052463 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.120979 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.121209 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.62118317 +0000 UTC m=+167.104522455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.121347 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.121659 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.621650773 +0000 UTC m=+167.104990058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.144414 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.144768 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.150641 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.221909 4758 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lnj88 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.222247 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" podUID="c8bd5414-72ea-40f8-8cf2-a6bf81e1258a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.222977 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.224194 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.724174716 +0000 UTC m=+167.207514021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.249217 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" event={"ID":"f6422cff-e2d5-4935-81b3-85fbb721a86b","Type":"ContainerStarted","Data":"2b9582debeeabb3acbdf48000e608bee94a1722447fd8e5d9ceae08589184c22"} Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.250092 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fjsgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.250125 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.250589 4758 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wszfq container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.250630 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" podUID="db717b97-58b5-402c-983f-9bf1e88c40a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.256415 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9h8hv" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.324952 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.329064 4758 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.829049115 +0000 UTC m=+167.312388490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.362570 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2xqns"] Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.429114 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.429388 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:25.929373064 +0000 UTC m=+167.412712349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.545870 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.546358 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:26.046338515 +0000 UTC m=+167.529677830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.647098 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.647327 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:26.147296124 +0000 UTC m=+167.630635419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.647463 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.647836 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:26.147823708 +0000 UTC m=+167.631163003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.693783 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.750034 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.750230 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:26.250201927 +0000 UTC m=+167.733541212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.750282 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.750677 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:26.250664 +0000 UTC m=+167.734003285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.816985 4758 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.871434 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec5698b-f4e0-4c73-abe0-f999df35f0c6-config-volume\") pod \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.871871 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec5698b-f4e0-4c73-abe0-f999df35f0c6-secret-volume\") pod \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.872045 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jcd4\" (UniqueName: \"kubernetes.io/projected/cec5698b-f4e0-4c73-abe0-f999df35f0c6-kube-api-access-2jcd4\") pod \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\" (UID: \"cec5698b-f4e0-4c73-abe0-f999df35f0c6\") " Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.872089 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cec5698b-f4e0-4c73-abe0-f999df35f0c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "cec5698b-f4e0-4c73-abe0-f999df35f0c6" (UID: "cec5698b-f4e0-4c73-abe0-f999df35f0c6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.872387 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.872545 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:26.37252287 +0000 UTC m=+167.855862185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.872879 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.873066 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec5698b-f4e0-4c73-abe0-f999df35f0c6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.873247 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:26.373229029 +0000 UTC m=+167.856568314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.974588 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lnj88" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.992206 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec5698b-f4e0-4c73-abe0-f999df35f0c6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cec5698b-f4e0-4c73-abe0-f999df35f0c6" (UID: "cec5698b-f4e0-4c73-abe0-f999df35f0c6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.992289 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec5698b-f4e0-4c73-abe0-f999df35f0c6-kube-api-access-2jcd4" (OuterVolumeSpecName: "kube-api-access-2jcd4") pod "cec5698b-f4e0-4c73-abe0-f999df35f0c6" (UID: "cec5698b-f4e0-4c73-abe0-f999df35f0c6"). InnerVolumeSpecName "kube-api-access-2jcd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.992534 4758 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T16:32:25.817215704Z","Handler":null,"Name":""} Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.993270 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.993699 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:26.493676669 +0000 UTC m=+167.977015944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.996216 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.996425 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cec5698b-f4e0-4c73-abe0-f999df35f0c6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:25 crc kubenswrapper[4758]: I0122 16:32:25.996440 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jcd4\" (UniqueName: \"kubernetes.io/projected/cec5698b-f4e0-4c73-abe0-f999df35f0c6-kube-api-access-2jcd4\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:25 crc kubenswrapper[4758]: E0122 16:32:25.997516 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:32:26.497504678 +0000 UTC m=+167.980843963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kd79d" (UID: "1c983b09-f715-422e-960d-36dcc610c30b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.032077 4758 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.032110 4758 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.038346 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8v88c"] Jan 22 16:32:26 crc kubenswrapper[4758]: E0122 16:32:26.038591 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cec5698b-f4e0-4c73-abe0-f999df35f0c6" containerName="collect-profiles" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.038611 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec5698b-f4e0-4c73-abe0-f999df35f0c6" containerName="collect-profiles" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.038718 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec5698b-f4e0-4c73-abe0-f999df35f0c6" containerName="collect-profiles" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.041272 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.042681 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.059934 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:26 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:26 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:26 crc kubenswrapper[4758]: healthz check failed Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.060286 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.086939 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8v88c"] Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.109594 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.112480 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-catalog-content\") pod \"certified-operators-8v88c\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.112556 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnx6v\" (UniqueName: \"kubernetes.io/projected/88b3808a-aa06-48ab-9b57-f474a2e1379a-kube-api-access-hnx6v\") pod \"certified-operators-8v88c\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.112651 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-utilities\") pod \"certified-operators-8v88c\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.118386 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-654gb" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.145155 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c6qmr"] Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.148475 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.152708 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.154290 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.162489 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6qmr"] Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.213844 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-catalog-content\") pod \"certified-operators-8v88c\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.213895 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-catalog-content\") pod \"community-operators-c6qmr\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.213923 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.213947 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnx6v\" (UniqueName: \"kubernetes.io/projected/88b3808a-aa06-48ab-9b57-f474a2e1379a-kube-api-access-hnx6v\") pod \"certified-operators-8v88c\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.213967 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqdl\" (UniqueName: \"kubernetes.io/projected/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-kube-api-access-6kqdl\") pod \"community-operators-c6qmr\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.214009 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-utilities\") pod \"community-operators-c6qmr\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.214031 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-utilities\") pod \"certified-operators-8v88c\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.214712 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-utilities\") pod \"certified-operators-8v88c\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.215222 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-catalog-content\") pod \"certified-operators-8v88c\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.216977 4758 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.217011 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.227182 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.227233 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.227555 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.227598 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.236099 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnx6v\" (UniqueName: \"kubernetes.io/projected/88b3808a-aa06-48ab-9b57-f474a2e1379a-kube-api-access-hnx6v\") pod 
\"certified-operators-8v88c\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.254552 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xqns" event={"ID":"3ef1c490-d5f9-458d-8b3e-8580a5f07df6","Type":"ContainerStarted","Data":"8e866165de337eb55407ebedf6092675b9f30c2586971d76ab3164a18a688b5f"} Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.256079 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" event={"ID":"cec5698b-f4e0-4c73-abe0-f999df35f0c6","Type":"ContainerDied","Data":"42833b78be4d022a890b796bef8c7338af78723082c329af1093cf4985d0968c"} Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.256194 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42833b78be4d022a890b796bef8c7338af78723082c329af1093cf4985d0968c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.256367 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.260804 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" event={"ID":"f6422cff-e2d5-4935-81b3-85fbb721a86b","Type":"ContainerStarted","Data":"f45784a1507c41d5791f818f9418b0f3541e7a497c4579f9b1c278e89717e89e"} Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.261208 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fjsgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.261254 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.265642 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wszfq" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.318173 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-utilities\") pod \"community-operators-c6qmr\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.319383 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-catalog-content\") pod \"community-operators-c6qmr\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.319465 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kqdl\" (UniqueName: 
\"kubernetes.io/projected/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-kube-api-access-6kqdl\") pod \"community-operators-c6qmr\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.319479 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-utilities\") pod \"community-operators-c6qmr\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.319726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-catalog-content\") pod \"community-operators-c6qmr\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.335635 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mh88h"] Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.336998 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.359698 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mh88h"] Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.392605 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.394428 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kqdl\" (UniqueName: \"kubernetes.io/projected/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-kube-api-access-6kqdl\") pod \"community-operators-c6qmr\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.420881 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gdng\" (UniqueName: \"kubernetes.io/projected/0437f83e-83ed-42f5-88ab-110deeeac7a4-kube-api-access-6gdng\") pod \"certified-operators-mh88h\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.420930 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-catalog-content\") pod \"certified-operators-mh88h\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.420971 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-utilities\") pod \"certified-operators-mh88h\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.463887 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.522664 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gdng\" (UniqueName: \"kubernetes.io/projected/0437f83e-83ed-42f5-88ab-110deeeac7a4-kube-api-access-6gdng\") pod \"certified-operators-mh88h\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.522720 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-catalog-content\") pod \"certified-operators-mh88h\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.522792 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-utilities\") pod \"certified-operators-mh88h\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.523329 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-utilities\") pod \"certified-operators-mh88h\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.523905 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-catalog-content\") pod \"certified-operators-mh88h\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.536688 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b2rzs"] Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.537781 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.548939 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2rzs"] Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.558472 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gdng\" (UniqueName: \"kubernetes.io/projected/0437f83e-83ed-42f5-88ab-110deeeac7a4-kube-api-access-6gdng\") pod \"certified-operators-mh88h\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.624156 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-catalog-content\") pod \"community-operators-b2rzs\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.624246 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-utilities\") pod \"community-operators-b2rzs\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.624313 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg45h\" (UniqueName: \"kubernetes.io/projected/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-kube-api-access-hg45h\") pod \"community-operators-b2rzs\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.658212 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.725562 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-catalog-content\") pod \"community-operators-b2rzs\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.725628 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-utilities\") pod \"community-operators-b2rzs\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.725690 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg45h\" (UniqueName: \"kubernetes.io/projected/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-kube-api-access-hg45h\") pod \"community-operators-b2rzs\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.726576 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-utilities\") pod \"community-operators-b2rzs\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.726652 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-catalog-content\") pod \"community-operators-b2rzs\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.749074 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg45h\" (UniqueName: \"kubernetes.io/projected/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-kube-api-access-hg45h\") pod \"community-operators-b2rzs\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.827097 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kd79d\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.837204 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.837734 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8v88c"] Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.837769 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6qmr"] Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.887095 4758 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.894268 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.991011 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.991048 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.992212 4758 patch_prober.go:28] interesting pod/console-f9d7485db-n2kln container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 22 16:32:26 crc kubenswrapper[4758]: I0122 16:32:26.992255 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n2kln" podUID="8f67259d-8eec-4f78-929f-01e7abe893f6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.006455 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mh88h"] Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.049573 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.051688 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:27 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:27 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:27 crc kubenswrapper[4758]: healthz check failed Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.051720 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:27 crc kubenswrapper[4758]: W0122 16:32:27.223848 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0437f83e_83ed_42f5_88ab_110deeeac7a4.slice/crio-1b89572c89fa54bb472656f35042d63af561f98b6ebebf494db7601b9df0a43e WatchSource:0}: Error finding container 1b89572c89fa54bb472656f35042d63af561f98b6ebebf494db7601b9df0a43e: Status 404 returned error can't find the container with id 1b89572c89fa54bb472656f35042d63af561f98b6ebebf494db7601b9df0a43e Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.253158 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fjsgm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.253201 4758 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.253836 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fjsgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.253859 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.313999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v88c" event={"ID":"88b3808a-aa06-48ab-9b57-f474a2e1379a","Type":"ContainerStarted","Data":"2a986056293c56b3775e881773212166c7505ac4f9f89e8e2f09f84ce3910057"} Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.320883 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6qmr" event={"ID":"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4","Type":"ContainerStarted","Data":"930621c983d344d7049ad24f91878878538f33a7eeb161ae0f994ceb85ae9111"} Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.333825 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh88h" event={"ID":"0437f83e-83ed-42f5-88ab-110deeeac7a4","Type":"ContainerStarted","Data":"1b89572c89fa54bb472656f35042d63af561f98b6ebebf494db7601b9df0a43e"} Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.412803 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kd79d"] Jan 22 16:32:27 crc kubenswrapper[4758]: I0122 16:32:27.471318 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2rzs"] Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.051039 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:28 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:28 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:28 crc kubenswrapper[4758]: healthz check failed Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.051334 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.140658 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wjs4t"] Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.141958 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.144130 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.156732 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjs4t"] Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.294829 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-catalog-content\") pod \"redhat-marketplace-wjs4t\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.294865 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-utilities\") pod \"redhat-marketplace-wjs4t\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.294891 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvcxg\" (UniqueName: \"kubernetes.io/projected/a12a62bb-3713-4f66-902e-673cc09db2ee-kube-api-access-kvcxg\") pod \"redhat-marketplace-wjs4t\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.340268 4758 generic.go:334] "Generic (PLEG): container finished" podID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerID="81348e474a064553ee490f2f52e2a9d4997af0961b7545ad14415651bbb90908" exitCode=0 Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.340336 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v88c" event={"ID":"88b3808a-aa06-48ab-9b57-f474a2e1379a","Type":"ContainerDied","Data":"81348e474a064553ee490f2f52e2a9d4997af0961b7545ad14415651bbb90908"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.342472 4758 generic.go:334] "Generic (PLEG): container finished" podID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerID="2b06c240c13ac71aa873c8491ed2c54fb64ed87343fd8ba85555e22e613c36b8" exitCode=0 Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.342525 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2rzs" event={"ID":"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9","Type":"ContainerDied","Data":"2b06c240c13ac71aa873c8491ed2c54fb64ed87343fd8ba85555e22e613c36b8"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.342603 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2rzs" event={"ID":"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9","Type":"ContainerStarted","Data":"18c999ed0d6e1b4702584de69c6aed237434a262c3945cb9712190505b055913"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.342573 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.345018 4758 generic.go:334] "Generic (PLEG): container finished" podID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" 
containerID="43d9a5e9db109d92bdb8bc0744b9b457e395a08a418345afba79e3c1b91ddc02" exitCode=0 Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.345074 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6qmr" event={"ID":"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4","Type":"ContainerDied","Data":"43d9a5e9db109d92bdb8bc0744b9b457e395a08a418345afba79e3c1b91ddc02"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.347699 4758 generic.go:334] "Generic (PLEG): container finished" podID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerID="679e83afb94bfd6c31f16c82313770f48b39c11d40eb40aff2e9b243c3a5faf6" exitCode=0 Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.347807 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh88h" event={"ID":"0437f83e-83ed-42f5-88ab-110deeeac7a4","Type":"ContainerDied","Data":"679e83afb94bfd6c31f16c82313770f48b39c11d40eb40aff2e9b243c3a5faf6"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.349631 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" event={"ID":"1c983b09-f715-422e-960d-36dcc610c30b","Type":"ContainerStarted","Data":"a05e46dff7100ab1d08ccefc40448405fa7dd4821e00d9b7ec4ac4175d7c6f6b"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.349656 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" event={"ID":"1c983b09-f715-422e-960d-36dcc610c30b","Type":"ContainerStarted","Data":"522bd19cef8372e2e486841d1a62589fd1e4fd104e07af43a8df5af7304c1632"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.350245 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.355023 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xqns" event={"ID":"3ef1c490-d5f9-458d-8b3e-8580a5f07df6","Type":"ContainerStarted","Data":"2085afb4d7d209bc4fbeabc766359c03a77206a614f6d4d37a9358690881a535"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.355052 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xqns" event={"ID":"3ef1c490-d5f9-458d-8b3e-8580a5f07df6","Type":"ContainerStarted","Data":"cdb59c0cca9202202e96df57c083cb60e41fa1a0bf607505d1c9f6df289e01f8"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.362832 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" event={"ID":"f6422cff-e2d5-4935-81b3-85fbb721a86b","Type":"ContainerStarted","Data":"216a7c6ba83e3246398b58479a65b023571b81c03ec73563f71659e0f3767172"} Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.383042 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2xqns" podStartSLOduration=146.38300677 podStartE2EDuration="2m26.38300677s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:28.382122974 +0000 UTC m=+169.865462259" watchObservedRunningTime="2026-01-22 16:32:28.38300677 +0000 UTC m=+169.866346055" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.395764 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-catalog-content\") pod \"redhat-marketplace-wjs4t\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.396046 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-utilities\") pod \"redhat-marketplace-wjs4t\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.396136 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvcxg\" (UniqueName: \"kubernetes.io/projected/a12a62bb-3713-4f66-902e-673cc09db2ee-kube-api-access-kvcxg\") pod \"redhat-marketplace-wjs4t\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.397136 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-catalog-content\") pod \"redhat-marketplace-wjs4t\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.397358 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-utilities\") pod \"redhat-marketplace-wjs4t\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.421792 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvcxg\" (UniqueName: \"kubernetes.io/projected/a12a62bb-3713-4f66-902e-673cc09db2ee-kube-api-access-kvcxg\") pod \"redhat-marketplace-wjs4t\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.470058 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.505360 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" podStartSLOduration=146.505341602 podStartE2EDuration="2m26.505341602s" podCreationTimestamp="2026-01-22 16:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:28.504392506 +0000 UTC m=+169.987731791" watchObservedRunningTime="2026-01-22 16:32:28.505341602 +0000 UTC m=+169.988680887" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.559670 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nthqj"] Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.560666 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.583202 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nthqj"] Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.709452 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-utilities\") pod \"redhat-marketplace-nthqj\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.709877 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-catalog-content\") pod \"redhat-marketplace-nthqj\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.709945 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tnwm\" (UniqueName: \"kubernetes.io/projected/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-kube-api-access-7tnwm\") pod \"redhat-marketplace-nthqj\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.756106 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-l5xjz" podStartSLOduration=14.756090891 podStartE2EDuration="14.756090891s" podCreationTimestamp="2026-01-22 16:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:28.731237478 +0000 UTC m=+170.214576763" watchObservedRunningTime="2026-01-22 16:32:28.756090891 +0000 UTC m=+170.239430176" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.810763 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-utilities\") pod \"redhat-marketplace-nthqj\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.810807 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-catalog-content\") pod \"redhat-marketplace-nthqj\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.810846 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tnwm\" (UniqueName: \"kubernetes.io/projected/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-kube-api-access-7tnwm\") pod \"redhat-marketplace-nthqj\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.811223 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-utilities\") pod \"redhat-marketplace-nthqj\" (UID: 
\"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.811413 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-catalog-content\") pod \"redhat-marketplace-nthqj\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.822051 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.823197 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.831763 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.832299 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.834659 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tnwm\" (UniqueName: \"kubernetes.io/projected/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-kube-api-access-7tnwm\") pod \"redhat-marketplace-nthqj\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.844147 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.855208 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjs4t"] Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.884041 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.911639 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:28 crc kubenswrapper[4758]: I0122 16:32:28.911860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.014452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.014538 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.014702 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.049305 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.053232 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:32:29 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 22 16:32:29 crc kubenswrapper[4758]: [+]process-running ok Jan 22 16:32:29 crc kubenswrapper[4758]: healthz check failed Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.053287 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.157925 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.168990 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.169956 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.172576 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.236332 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.322426 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-utilities\") pod \"redhat-operators-s7bgv\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.322498 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d864j\" (UniqueName: \"kubernetes.io/projected/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-kube-api-access-d864j\") pod \"redhat-operators-s7bgv\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.322591 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-catalog-content\") pod \"redhat-operators-s7bgv\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.334562 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.341026 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-559wb"] Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.343579 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.389074 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-559wb"] Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.404236 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nthqj"] Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.423611 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-catalog-content\") pod \"redhat-operators-559wb\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.423663 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-utilities\") pod \"redhat-operators-s7bgv\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.423717 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-utilities\") pod \"redhat-operators-559wb\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.423774 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d864j\" (UniqueName: \"kubernetes.io/projected/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-kube-api-access-d864j\") pod \"redhat-operators-s7bgv\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.423810 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fp54\" (UniqueName: \"kubernetes.io/projected/895a8f2e-590a-4270-9eb0-1f7c76da93d9-kube-api-access-2fp54\") pod \"redhat-operators-559wb\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.423899 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-catalog-content\") pod \"redhat-operators-s7bgv\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.424496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-catalog-content\") pod \"redhat-operators-s7bgv\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.425084 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-utilities\") pod \"redhat-operators-s7bgv\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " 
pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.471557 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d864j\" (UniqueName: \"kubernetes.io/projected/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-kube-api-access-d864j\") pod \"redhat-operators-s7bgv\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.497894 4758 generic.go:334] "Generic (PLEG): container finished" podID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerID="e96fc013e143123006a46ad80975200eecc834f0c5909cc49b4d46f53b63c771" exitCode=0 Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.499014 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjs4t" event={"ID":"a12a62bb-3713-4f66-902e-673cc09db2ee","Type":"ContainerDied","Data":"e96fc013e143123006a46ad80975200eecc834f0c5909cc49b4d46f53b63c771"} Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.499045 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjs4t" event={"ID":"a12a62bb-3713-4f66-902e-673cc09db2ee","Type":"ContainerStarted","Data":"7e4e5e6940233b49b11fd5366b591eda968fe326775d2c1e20458d4fb644172a"} Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.538583 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-utilities\") pod \"redhat-operators-559wb\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.538699 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fp54\" (UniqueName: \"kubernetes.io/projected/895a8f2e-590a-4270-9eb0-1f7c76da93d9-kube-api-access-2fp54\") pod \"redhat-operators-559wb\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.538789 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-catalog-content\") pod \"redhat-operators-559wb\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.539245 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-utilities\") pod \"redhat-operators-559wb\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.539440 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-catalog-content\") pod \"redhat-operators-559wb\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.591069 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.601071 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fp54\" (UniqueName: \"kubernetes.io/projected/895a8f2e-590a-4270-9eb0-1f7c76da93d9-kube-api-access-2fp54\") pod \"redhat-operators-559wb\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.688030 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:32:29 crc kubenswrapper[4758]: I0122 16:32:29.699577 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 16:32:30 crc kubenswrapper[4758]: I0122 16:32:30.073354 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:30 crc kubenswrapper[4758]: I0122 16:32:30.083072 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7jtcn" Jan 22 16:32:30 crc kubenswrapper[4758]: I0122 16:32:30.135038 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Jan 22 16:32:30 crc kubenswrapper[4758]: I0122 16:32:30.475772 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-559wb"] Jan 22 16:32:30 crc kubenswrapper[4758]: W0122 16:32:30.503543 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod895a8f2e_590a_4270_9eb0_1f7c76da93d9.slice/crio-27139c45d4140b5af7a08fb5e17b9f5d7f14f3a14c50375f804352b4adfb3170 WatchSource:0}: Error finding container 27139c45d4140b5af7a08fb5e17b9f5d7f14f3a14c50375f804352b4adfb3170: Status 404 returned error can't find the container with id 27139c45d4140b5af7a08fb5e17b9f5d7f14f3a14c50375f804352b4adfb3170 Jan 22 16:32:30 crc kubenswrapper[4758]: I0122 16:32:30.512896 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2","Type":"ContainerStarted","Data":"addade01d6f42132f4c663bcc1cdb2b1b7e3f1ba636160a005f6818b33843783"} Jan 22 16:32:30 crc kubenswrapper[4758]: I0122 16:32:30.516179 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nthqj" event={"ID":"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e","Type":"ContainerStarted","Data":"a913934e4409aaa5b93a33a016278889e8f1d89d95c7217a35d1c830b4dc92bb"} Jan 22 16:32:30 crc kubenswrapper[4758]: I0122 16:32:30.529445 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"e88aa20b-e3aa-4cc2-856c-0dd5e9394992","Type":"ContainerStarted","Data":"d95a211d266181654e065fe79fa20039053a7cce147e1b240a6501b0d3cfaa03"} Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.550401 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"e88aa20b-e3aa-4cc2-856c-0dd5e9394992","Type":"ContainerStarted","Data":"d878934d875359337952188a2bebc2f7448d994129f6aa2d57436b2221188ed8"} Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.555695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2","Type":"ContainerStarted","Data":"ae96846e5bb7f63f15b69b9b82f0b9c3ddd31533fc5ffd2f5c5312a85f955b3a"} Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.560379 4758 generic.go:334] "Generic (PLEG): container finished" podID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerID="6a26ad8078d81d9531f2bbea178c58bbe2212adad5804e0620199758ada95f29" exitCode=0 Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.560450 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nthqj" event={"ID":"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e","Type":"ContainerDied","Data":"6a26ad8078d81d9531f2bbea178c58bbe2212adad5804e0620199758ada95f29"} Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.562769 4758 generic.go:334] "Generic (PLEG): container finished" podID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerID="769905b650bf3b5b3b8be0a6146c1f7ba0f9a6d50f438f0fccc8f1f87fcdeefe" exitCode=0 Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.562793 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-559wb" event={"ID":"895a8f2e-590a-4270-9eb0-1f7c76da93d9","Type":"ContainerDied","Data":"769905b650bf3b5b3b8be0a6146c1f7ba0f9a6d50f438f0fccc8f1f87fcdeefe"} Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.562806 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-559wb" event={"ID":"895a8f2e-590a-4270-9eb0-1f7c76da93d9","Type":"ContainerStarted","Data":"27139c45d4140b5af7a08fb5e17b9f5d7f14f3a14c50375f804352b4adfb3170"} Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.642844 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.643483 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.648884 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.650857 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.667586 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.689472 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.689538 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.791181 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.791198 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.791360 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.822759 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:31 crc kubenswrapper[4758]: I0122 16:32:31.967216 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:32 crc kubenswrapper[4758]: I0122 16:32:32.251277 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 16:32:32 crc kubenswrapper[4758]: I0122 16:32:32.369466 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rfv8b" Jan 22 16:32:32 crc kubenswrapper[4758]: I0122 16:32:32.570143 4758 generic.go:334] "Generic (PLEG): container finished" podID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerID="d878934d875359337952188a2bebc2f7448d994129f6aa2d57436b2221188ed8" exitCode=0 Jan 22 16:32:32 crc kubenswrapper[4758]: I0122 16:32:32.570206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"e88aa20b-e3aa-4cc2-856c-0dd5e9394992","Type":"ContainerDied","Data":"d878934d875359337952188a2bebc2f7448d994129f6aa2d57436b2221188ed8"} Jan 22 16:32:32 crc kubenswrapper[4758]: I0122 16:32:32.573910 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jfncv_7a8b9092-45e9-456e-b1bc-e997c96a9836/cluster-samples-operator/0.log" Jan 22 16:32:32 crc kubenswrapper[4758]: I0122 16:32:32.573965 4758 generic.go:334] "Generic (PLEG): container finished" podID="7a8b9092-45e9-456e-b1bc-e997c96a9836" containerID="1c25efbc5c6393e773df55eab54fccdfe1c541b22ba31dd35dee42ed499b5278" exitCode=2 Jan 22 16:32:32 crc kubenswrapper[4758]: I0122 16:32:32.574135 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" event={"ID":"7a8b9092-45e9-456e-b1bc-e997c96a9836","Type":"ContainerDied","Data":"1c25efbc5c6393e773df55eab54fccdfe1c541b22ba31dd35dee42ed499b5278"} Jan 22 16:32:32 crc kubenswrapper[4758]: I0122 16:32:32.574925 4758 scope.go:117] "RemoveContainer" containerID="1c25efbc5c6393e773df55eab54fccdfe1c541b22ba31dd35dee42ed499b5278" Jan 22 16:32:32 crc kubenswrapper[4758]: I0122 16:32:32.593877 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.593856735 podStartE2EDuration="4.593856735s" podCreationTimestamp="2026-01-22 16:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:32.592919859 +0000 UTC m=+174.076259154" watchObservedRunningTime="2026-01-22 16:32:32.593856735 +0000 UTC m=+174.077196020" Jan 22 16:32:33 crc kubenswrapper[4758]: I0122 16:32:33.608686 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jfncv_7a8b9092-45e9-456e-b1bc-e997c96a9836/cluster-samples-operator/0.log" Jan 22 16:32:33 crc kubenswrapper[4758]: I0122 16:32:33.609243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" event={"ID":"7a8b9092-45e9-456e-b1bc-e997c96a9836","Type":"ContainerStarted","Data":"326370ef41cef10caa378860f23a0eef903309f2265b318cb214a9882a81c7d7"} Jan 22 16:32:33 crc kubenswrapper[4758]: I0122 16:32:33.617552 4758 generic.go:334] "Generic (PLEG): container finished" podID="83ce5f1a-67e0-43f8-b8cf-99636c7e25a2" containerID="ae96846e5bb7f63f15b69b9b82f0b9c3ddd31533fc5ffd2f5c5312a85f955b3a" exitCode=0 Jan 22 16:32:33 crc 
kubenswrapper[4758]: I0122 16:32:33.617648 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2","Type":"ContainerDied","Data":"ae96846e5bb7f63f15b69b9b82f0b9c3ddd31533fc5ffd2f5c5312a85f955b3a"} Jan 22 16:32:33 crc kubenswrapper[4758]: I0122 16:32:33.620958 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c","Type":"ContainerStarted","Data":"05f0f9f9ee1a35d98989d744e0b11178cd1cc76b8850f12f469b16580159afb2"} Jan 22 16:32:34 crc kubenswrapper[4758]: I0122 16:32:34.631392 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c","Type":"ContainerStarted","Data":"2d250d82f28027db2e90e7df695367011a7bac274b9b071872c9d86c4daa8f39"} Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.046597 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.144967 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.166207 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kubelet-dir\") pod \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\" (UID: \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\") " Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.166393 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kube-api-access\") pod \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\" (UID: \"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2\") " Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.166637 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "83ce5f1a-67e0-43f8-b8cf-99636c7e25a2" (UID: "83ce5f1a-67e0-43f8-b8cf-99636c7e25a2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.172150 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.183462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "83ce5f1a-67e0-43f8-b8cf-99636c7e25a2" (UID: "83ce5f1a-67e0-43f8-b8cf-99636c7e25a2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.273492 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83ce5f1a-67e0-43f8-b8cf-99636c7e25a2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.647585 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"83ce5f1a-67e0-43f8-b8cf-99636c7e25a2","Type":"ContainerDied","Data":"addade01d6f42132f4c663bcc1cdb2b1b7e3f1ba636160a005f6818b33843783"} Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.647617 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.647626 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="addade01d6f42132f4c663bcc1cdb2b1b7e3f1ba636160a005f6818b33843783" Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.649436 4758 generic.go:334] "Generic (PLEG): container finished" podID="c51caeef-6d2f-48b6-8bc0-7677dc92ae2c" containerID="2d250d82f28027db2e90e7df695367011a7bac274b9b071872c9d86c4daa8f39" exitCode=0 Jan 22 16:32:35 crc kubenswrapper[4758]: I0122 16:32:35.649468 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c","Type":"ContainerDied","Data":"2d250d82f28027db2e90e7df695367011a7bac274b9b071872c9d86c4daa8f39"} Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.226239 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.226510 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.226307 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.226826 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.661127 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jfncv_7a8b9092-45e9-456e-b1bc-e997c96a9836/cluster-samples-operator/1.log" Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.663389 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jfncv_7a8b9092-45e9-456e-b1bc-e997c96a9836/cluster-samples-operator/0.log" Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.663428 4758 generic.go:334] "Generic (PLEG): container finished" podID="7a8b9092-45e9-456e-b1bc-e997c96a9836" containerID="326370ef41cef10caa378860f23a0eef903309f2265b318cb214a9882a81c7d7" exitCode=2 Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.663559 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" event={"ID":"7a8b9092-45e9-456e-b1bc-e997c96a9836","Type":"ContainerDied","Data":"326370ef41cef10caa378860f23a0eef903309f2265b318cb214a9882a81c7d7"} Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.663615 4758 scope.go:117] "RemoveContainer" containerID="1c25efbc5c6393e773df55eab54fccdfe1c541b22ba31dd35dee42ed499b5278" Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.664502 4758 scope.go:117] "RemoveContainer" containerID="326370ef41cef10caa378860f23a0eef903309f2265b318cb214a9882a81c7d7" Jan 22 16:32:36 crc kubenswrapper[4758]: E0122 16:32:36.665001 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-samples-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-samples-operator pod=cluster-samples-operator-665b6dd947-jfncv_openshift-cluster-samples-operator(7a8b9092-45e9-456e-b1bc-e997c96a9836)\"" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" podUID="7a8b9092-45e9-456e-b1bc-e997c96a9836" Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.990680 4758 patch_prober.go:28] interesting pod/console-f9d7485db-n2kln container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 22 16:32:36 crc kubenswrapper[4758]: I0122 16:32:36.990729 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n2kln" podUID="8f67259d-8eec-4f78-929f-01e7abe893f6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 22 16:32:37 crc kubenswrapper[4758]: I0122 16:32:37.259275 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:32:40 crc kubenswrapper[4758]: I0122 16:32:40.940571 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:41 crc kubenswrapper[4758]: I0122 16:32:41.065055 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kubelet-dir\") pod \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\" (UID: \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\") " Jan 22 16:32:41 crc kubenswrapper[4758]: I0122 16:32:41.065172 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kube-api-access\") pod \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\" (UID: \"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c\") " Jan 22 16:32:41 crc kubenswrapper[4758]: I0122 16:32:41.065197 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c51caeef-6d2f-48b6-8bc0-7677dc92ae2c" (UID: "c51caeef-6d2f-48b6-8bc0-7677dc92ae2c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:32:41 crc kubenswrapper[4758]: I0122 16:32:41.065401 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:41 crc kubenswrapper[4758]: I0122 16:32:41.079365 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c51caeef-6d2f-48b6-8bc0-7677dc92ae2c" (UID: "c51caeef-6d2f-48b6-8bc0-7677dc92ae2c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:41 crc kubenswrapper[4758]: I0122 16:32:41.166141 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c51caeef-6d2f-48b6-8bc0-7677dc92ae2c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:41 crc kubenswrapper[4758]: I0122 16:32:41.699472 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c51caeef-6d2f-48b6-8bc0-7677dc92ae2c","Type":"ContainerDied","Data":"05f0f9f9ee1a35d98989d744e0b11178cd1cc76b8850f12f469b16580159afb2"} Jan 22 16:32:41 crc kubenswrapper[4758]: I0122 16:32:41.699515 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f0f9f9ee1a35d98989d744e0b11178cd1cc76b8850f12f469b16580159afb2" Jan 22 16:32:41 crc kubenswrapper[4758]: I0122 16:32:41.699540 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:32:43 crc kubenswrapper[4758]: I0122 16:32:43.837674 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:32:43 crc kubenswrapper[4758]: I0122 16:32:43.839270 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.256962 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.257036 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.257084 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-p5cqb" Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.257421 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.257440 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.257664 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.257689 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.257690 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ab1303618c1cf291efd95c335a5fe5c3db7734817b22ab68e757fa31d693d809"} pod="openshift-console/downloads-7954f5f757-p5cqb" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 22 16:32:46 crc 
kubenswrapper[4758]: I0122 16:32:46.257777 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" containerID="cri-o://ab1303618c1cf291efd95c335a5fe5c3db7734817b22ab68e757fa31d693d809" gracePeriod=2 Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.901676 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.993866 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:46 crc kubenswrapper[4758]: I0122 16:32:46.997624 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:32:48 crc kubenswrapper[4758]: I0122 16:32:48.091017 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 16:32:48 crc kubenswrapper[4758]: I0122 16:32:48.091121 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 16:32:51 crc kubenswrapper[4758]: I0122 16:32:51.807592 4758 scope.go:117] "RemoveContainer" containerID="326370ef41cef10caa378860f23a0eef903309f2265b318cb214a9882a81c7d7" Jan 22 16:32:52 crc kubenswrapper[4758]: I0122 16:32:52.770650 4758 generic.go:334] "Generic (PLEG): container finished" podID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerID="ab1303618c1cf291efd95c335a5fe5c3db7734817b22ab68e757fa31d693d809" exitCode=0 Jan 22 16:32:52 crc kubenswrapper[4758]: I0122 16:32:52.770707 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p5cqb" event={"ID":"327d43d9-41eb-4ef4-9df0-d38e0739b7df","Type":"ContainerDied","Data":"ab1303618c1cf291efd95c335a5fe5c3db7734817b22ab68e757fa31d693d809"} Jan 22 16:32:56 crc kubenswrapper[4758]: I0122 16:32:56.227647 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:32:56 crc kubenswrapper[4758]: I0122 16:32:56.228041 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:32:57 crc kubenswrapper[4758]: I0122 16:32:57.239198 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rjlbg" Jan 22 16:33:03 crc kubenswrapper[4758]: I0122 16:33:03.838807 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jfncv_7a8b9092-45e9-456e-b1bc-e997c96a9836/cluster-samples-operator/1.log" Jan 22 16:33:03 crc kubenswrapper[4758]: E0122 16:33:03.948554 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 16:33:03 crc kubenswrapper[4758]: E0122 16:33:03.948819 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gdng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mh88h_openshift-marketplace(0437f83e-83ed-42f5-88ab-110deeeac7a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:33:03 crc kubenswrapper[4758]: E0122 16:33:03.949988 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mh88h" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" Jan 22 16:33:04 crc kubenswrapper[4758]: E0122 16:33:04.174013 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 16:33:04 crc kubenswrapper[4758]: E0122 16:33:04.174196 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnx6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8v88c_openshift-marketplace(88b3808a-aa06-48ab-9b57-f474a2e1379a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:33:04 crc kubenswrapper[4758]: E0122 16:33:04.175600 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8v88c" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" Jan 22 16:33:06 crc kubenswrapper[4758]: I0122 16:33:06.226792 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:33:06 crc kubenswrapper[4758]: I0122 16:33:06.226865 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:33:06 crc kubenswrapper[4758]: E0122 16:33:06.850869 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8v88c" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" Jan 22 16:33:06 crc kubenswrapper[4758]: E0122 16:33:06.850872 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962\": context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 16:33:06 crc kubenswrapper[4758]: E0122 16:33:06.850928 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mh88h" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" Jan 22 16:33:06 crc kubenswrapper[4758]: E0122 16:33:06.851161 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fp54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-559wb_openshift-marketplace(895a8f2e-590a-4270-9eb0-1f7c76da93d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962\": context canceled" logger="UnhandledError" Jan 22 16:33:06 crc kubenswrapper[4758]: E0122 16:33:06.852431 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-559wb" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" Jan 22 16:33:07 crc kubenswrapper[4758]: E0122 16:33:07.021941 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 16:33:07 crc kubenswrapper[4758]: 
E0122 16:33:07.022084 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kqdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c6qmr_openshift-marketplace(6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:33:07 crc kubenswrapper[4758]: E0122 16:33:07.023371 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-c6qmr" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" Jan 22 16:33:07 crc kubenswrapper[4758]: E0122 16:33:07.191786 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 16:33:07 crc kubenswrapper[4758]: E0122 16:33:07.191957 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hg45h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-b2rzs_openshift-marketplace(8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:33:07 crc kubenswrapper[4758]: E0122 16:33:07.193163 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-b2rzs" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.938489 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.939102 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvcxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wjs4t_openshift-marketplace(a12a62bb-3713-4f66-902e-673cc09db2ee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.939145 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.939226 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d864j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-s7bgv_openshift-marketplace(e88aa20b-e3aa-4cc2-856c-0dd5e9394992): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962\": context canceled" logger="UnhandledError" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.938893 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-b2rzs" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.939359 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c6qmr" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.941596 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-s7bgv" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.942981 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wjs4t" 
podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.952617 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.952949 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tnwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nthqj_openshift-marketplace(25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:33:08 crc kubenswrapper[4758]: E0122 16:33:08.954101 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nthqj" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.052665 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 16:33:09 crc kubenswrapper[4758]: E0122 16:33:09.053281 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ce5f1a-67e0-43f8-b8cf-99636c7e25a2" containerName="pruner" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.053304 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ce5f1a-67e0-43f8-b8cf-99636c7e25a2" containerName="pruner" Jan 22 16:33:09 crc kubenswrapper[4758]: E0122 16:33:09.053324 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51caeef-6d2f-48b6-8bc0-7677dc92ae2c" containerName="pruner" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.053332 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c51caeef-6d2f-48b6-8bc0-7677dc92ae2c" containerName="pruner" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.053475 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ce5f1a-67e0-43f8-b8cf-99636c7e25a2" containerName="pruner" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.053487 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51caeef-6d2f-48b6-8bc0-7677dc92ae2c" containerName="pruner" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.053978 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.055888 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.055990 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.060580 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.227634 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18a92181-c525-4fa5-b436-689963b5fad6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"18a92181-c525-4fa5-b436-689963b5fad6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.227690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18a92181-c525-4fa5-b436-689963b5fad6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"18a92181-c525-4fa5-b436-689963b5fad6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.329294 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18a92181-c525-4fa5-b436-689963b5fad6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"18a92181-c525-4fa5-b436-689963b5fad6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.329348 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18a92181-c525-4fa5-b436-689963b5fad6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"18a92181-c525-4fa5-b436-689963b5fad6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.329431 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18a92181-c525-4fa5-b436-689963b5fad6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"18a92181-c525-4fa5-b436-689963b5fad6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.349694 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18a92181-c525-4fa5-b436-689963b5fad6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"18a92181-c525-4fa5-b436-689963b5fad6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:09 
crc kubenswrapper[4758]: I0122 16:33:09.393248 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.861548 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 16:33:09 crc kubenswrapper[4758]: W0122 16:33:09.868514 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod18a92181_c525_4fa5_b436_689963b5fad6.slice/crio-3234464eb8c7a9cbe1b85a128587a71106950e3726cea8762220bf085f03cbd0 WatchSource:0}: Error finding container 3234464eb8c7a9cbe1b85a128587a71106950e3726cea8762220bf085f03cbd0: Status 404 returned error can't find the container with id 3234464eb8c7a9cbe1b85a128587a71106950e3726cea8762220bf085f03cbd0 Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.872208 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jfncv_7a8b9092-45e9-456e-b1bc-e997c96a9836/cluster-samples-operator/1.log" Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.872576 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jfncv" event={"ID":"7a8b9092-45e9-456e-b1bc-e997c96a9836","Type":"ContainerStarted","Data":"b44c9cf42bb766a6eb222017e493723fa5f6283125adde11c0e3aa2012ae0756"} Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.875373 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p5cqb" event={"ID":"327d43d9-41eb-4ef4-9df0-d38e0739b7df","Type":"ContainerStarted","Data":"964c69ba9851f2bb5d9540722e43beb6cdc9ad4e175295e51739d7cf00373c56"} Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.875866 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:33:09 crc kubenswrapper[4758]: I0122 16:33:09.875912 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:33:09 crc kubenswrapper[4758]: E0122 16:33:09.880665 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s7bgv" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" Jan 22 16:33:09 crc kubenswrapper[4758]: E0122 16:33:09.880665 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nthqj" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" Jan 22 16:33:09 crc kubenswrapper[4758]: E0122 16:33:09.880700 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wjs4t" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" Jan 22 16:33:11 crc kubenswrapper[4758]: I0122 16:33:10.881012 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"18a92181-c525-4fa5-b436-689963b5fad6","Type":"ContainerStarted","Data":"450352c55b9b66a0683249c2c26ac08faf6988e31acef816e353285f0f8d21ca"} Jan 22 16:33:11 crc kubenswrapper[4758]: I0122 16:33:10.881373 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"18a92181-c525-4fa5-b436-689963b5fad6","Type":"ContainerStarted","Data":"3234464eb8c7a9cbe1b85a128587a71106950e3726cea8762220bf085f03cbd0"} Jan 22 16:33:11 crc kubenswrapper[4758]: I0122 16:33:10.881393 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-p5cqb" Jan 22 16:33:11 crc kubenswrapper[4758]: I0122 16:33:10.882608 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:33:11 crc kubenswrapper[4758]: I0122 16:33:10.882651 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:33:11 crc kubenswrapper[4758]: I0122 16:33:10.906338 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.90632329 podStartE2EDuration="1.90632329s" podCreationTimestamp="2026-01-22 16:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:33:10.90493098 +0000 UTC m=+212.388270265" watchObservedRunningTime="2026-01-22 16:33:10.90632329 +0000 UTC m=+212.389662575" Jan 22 16:33:12 crc kubenswrapper[4758]: I0122 16:33:12.013602 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:33:12 crc kubenswrapper[4758]: I0122 16:33:12.014124 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.652642 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.655238 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.660650 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.805578 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3893a5d6-af77-48c5-a325-35d144e54f8a-kube-api-access\") pod \"installer-9-crc\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.805643 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-var-lock\") pod \"installer-9-crc\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.805723 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.837881 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.837945 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.837994 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.838536 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.838637 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548" gracePeriod=600 Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.907547 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3893a5d6-af77-48c5-a325-35d144e54f8a-kube-api-access\") pod \"installer-9-crc\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.908037 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-var-lock\") pod \"installer-9-crc\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.908205 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-var-lock\") pod \"installer-9-crc\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.908671 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.908730 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.947902 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3893a5d6-af77-48c5-a325-35d144e54f8a-kube-api-access\") pod \"installer-9-crc\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:13 crc kubenswrapper[4758]: I0122 16:33:13.977510 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:33:14 crc kubenswrapper[4758]: I0122 16:33:14.023407 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548" exitCode=0 Jan 22 16:33:14 crc kubenswrapper[4758]: I0122 16:33:14.023805 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548"} Jan 22 16:33:14 crc kubenswrapper[4758]: I0122 16:33:14.509635 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 16:33:14 crc kubenswrapper[4758]: W0122 16:33:14.938253 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3893a5d6_af77_48c5_a325_35d144e54f8a.slice/crio-23e21bfce556e5bca38f3671daad5440530fd74d88c4ae332699434c92cc2636 WatchSource:0}: Error finding container 23e21bfce556e5bca38f3671daad5440530fd74d88c4ae332699434c92cc2636: Status 404 returned error can't find the container with id 23e21bfce556e5bca38f3671daad5440530fd74d88c4ae332699434c92cc2636 Jan 22 16:33:15 crc kubenswrapper[4758]: I0122 16:33:15.031321 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3893a5d6-af77-48c5-a325-35d144e54f8a","Type":"ContainerStarted","Data":"23e21bfce556e5bca38f3671daad5440530fd74d88c4ae332699434c92cc2636"} Jan 22 16:33:16 crc kubenswrapper[4758]: I0122 16:33:16.040687 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"a7cae046a3bb22e5d3a084fb0fecaa7e3bddc05b5196ba2795a8cbf04c254117"} Jan 22 16:33:16 crc kubenswrapper[4758]: I0122 16:33:16.042368 4758 generic.go:334] "Generic (PLEG): container finished" podID="18a92181-c525-4fa5-b436-689963b5fad6" containerID="450352c55b9b66a0683249c2c26ac08faf6988e31acef816e353285f0f8d21ca" exitCode=0 Jan 22 16:33:16 crc kubenswrapper[4758]: I0122 16:33:16.042432 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"18a92181-c525-4fa5-b436-689963b5fad6","Type":"ContainerDied","Data":"450352c55b9b66a0683249c2c26ac08faf6988e31acef816e353285f0f8d21ca"} Jan 22 16:33:16 crc kubenswrapper[4758]: I0122 16:33:16.044931 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3893a5d6-af77-48c5-a325-35d144e54f8a","Type":"ContainerStarted","Data":"07d6e00e2f24be95948ac7290716b0a25b0444aef40bdd61ac83888d200febf2"} Jan 22 16:33:16 crc kubenswrapper[4758]: I0122 16:33:16.227932 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:33:16 crc kubenswrapper[4758]: I0122 16:33:16.228019 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: 
connect: connection refused" Jan 22 16:33:16 crc kubenswrapper[4758]: I0122 16:33:16.228031 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-p5cqb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 16:33:16 crc kubenswrapper[4758]: I0122 16:33:16.228117 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p5cqb" podUID="327d43d9-41eb-4ef4-9df0-d38e0739b7df" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 16:33:17 crc kubenswrapper[4758]: I0122 16:33:17.346242 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:17 crc kubenswrapper[4758]: I0122 16:33:17.479636 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18a92181-c525-4fa5-b436-689963b5fad6-kube-api-access\") pod \"18a92181-c525-4fa5-b436-689963b5fad6\" (UID: \"18a92181-c525-4fa5-b436-689963b5fad6\") " Jan 22 16:33:17 crc kubenswrapper[4758]: I0122 16:33:17.480201 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18a92181-c525-4fa5-b436-689963b5fad6-kubelet-dir\") pod \"18a92181-c525-4fa5-b436-689963b5fad6\" (UID: \"18a92181-c525-4fa5-b436-689963b5fad6\") " Jan 22 16:33:17 crc kubenswrapper[4758]: I0122 16:33:17.480373 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a92181-c525-4fa5-b436-689963b5fad6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "18a92181-c525-4fa5-b436-689963b5fad6" (UID: "18a92181-c525-4fa5-b436-689963b5fad6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:33:17 crc kubenswrapper[4758]: I0122 16:33:17.480485 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18a92181-c525-4fa5-b436-689963b5fad6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:17 crc kubenswrapper[4758]: I0122 16:33:17.486318 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a92181-c525-4fa5-b436-689963b5fad6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "18a92181-c525-4fa5-b436-689963b5fad6" (UID: "18a92181-c525-4fa5-b436-689963b5fad6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:33:17 crc kubenswrapper[4758]: I0122 16:33:17.581707 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18a92181-c525-4fa5-b436-689963b5fad6-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:18 crc kubenswrapper[4758]: I0122 16:33:18.059690 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:33:18 crc kubenswrapper[4758]: I0122 16:33:18.059690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"18a92181-c525-4fa5-b436-689963b5fad6","Type":"ContainerDied","Data":"3234464eb8c7a9cbe1b85a128587a71106950e3726cea8762220bf085f03cbd0"} Jan 22 16:33:18 crc kubenswrapper[4758]: I0122 16:33:18.060362 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3234464eb8c7a9cbe1b85a128587a71106950e3726cea8762220bf085f03cbd0" Jan 22 16:33:18 crc kubenswrapper[4758]: I0122 16:33:18.093236 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.093216551 podStartE2EDuration="5.093216551s" podCreationTimestamp="2026-01-22 16:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:33:18.087299395 +0000 UTC m=+219.570638690" watchObservedRunningTime="2026-01-22 16:33:18.093216551 +0000 UTC m=+219.576555826" Jan 22 16:33:21 crc kubenswrapper[4758]: I0122 16:33:21.085368 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh88h" event={"ID":"0437f83e-83ed-42f5-88ab-110deeeac7a4","Type":"ContainerStarted","Data":"2df174418e884b4cf3b67404b07f226cf8e1296b25c4ff1d7cab69ccd1fdd01c"} Jan 22 16:33:22 crc kubenswrapper[4758]: I0122 16:33:22.122359 4758 generic.go:334] "Generic (PLEG): container finished" podID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerID="2df174418e884b4cf3b67404b07f226cf8e1296b25c4ff1d7cab69ccd1fdd01c" exitCode=0 Jan 22 16:33:22 crc kubenswrapper[4758]: I0122 16:33:22.122461 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh88h" event={"ID":"0437f83e-83ed-42f5-88ab-110deeeac7a4","Type":"ContainerDied","Data":"2df174418e884b4cf3b67404b07f226cf8e1296b25c4ff1d7cab69ccd1fdd01c"} Jan 22 16:33:22 crc kubenswrapper[4758]: I0122 16:33:22.132636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2rzs" event={"ID":"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9","Type":"ContainerStarted","Data":"1e0acb8ed556cc14512fc308b5a524d120c95ce37b647251de19030c581bf8d9"} Jan 22 16:33:23 crc kubenswrapper[4758]: I0122 16:33:23.141436 4758 generic.go:334] "Generic (PLEG): container finished" podID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerID="1e0acb8ed556cc14512fc308b5a524d120c95ce37b647251de19030c581bf8d9" exitCode=0 Jan 22 16:33:23 crc kubenswrapper[4758]: I0122 16:33:23.141536 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2rzs" event={"ID":"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9","Type":"ContainerDied","Data":"1e0acb8ed556cc14512fc308b5a524d120c95ce37b647251de19030c581bf8d9"} Jan 22 16:33:23 crc kubenswrapper[4758]: I0122 16:33:23.149769 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v88c" event={"ID":"88b3808a-aa06-48ab-9b57-f474a2e1379a","Type":"ContainerStarted","Data":"9e9cc8c35fc8f5cdfd74e6abff53ae2eac7dfda663c9ae64f12d5a594faef9cf"} Jan 22 16:33:24 crc kubenswrapper[4758]: I0122 16:33:24.158471 4758 generic.go:334] "Generic (PLEG): container finished" podID="88b3808a-aa06-48ab-9b57-f474a2e1379a" 
containerID="9e9cc8c35fc8f5cdfd74e6abff53ae2eac7dfda663c9ae64f12d5a594faef9cf" exitCode=0 Jan 22 16:33:24 crc kubenswrapper[4758]: I0122 16:33:24.158514 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v88c" event={"ID":"88b3808a-aa06-48ab-9b57-f474a2e1379a","Type":"ContainerDied","Data":"9e9cc8c35fc8f5cdfd74e6abff53ae2eac7dfda663c9ae64f12d5a594faef9cf"} Jan 22 16:33:26 crc kubenswrapper[4758]: I0122 16:33:26.235391 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-p5cqb" Jan 22 16:33:43 crc kubenswrapper[4758]: E0122 16:33:43.487761 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 16:33:43 crc kubenswrapper[4758]: E0122 16:33:43.488483 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fp54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-559wb_openshift-marketplace(895a8f2e-590a-4270-9eb0-1f7c76da93d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:33:43 crc kubenswrapper[4758]: E0122 16:33:43.489676 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-559wb" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.365360 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6qmr" event={"ID":"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4","Type":"ContainerStarted","Data":"5c7fd3b6b998083fd2f09c10a4cf8852ce10f3d758f76897756b5137a5c54138"} Jan 22 16:33:53 
crc kubenswrapper[4758]: I0122 16:33:53.369882 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh88h" event={"ID":"0437f83e-83ed-42f5-88ab-110deeeac7a4","Type":"ContainerStarted","Data":"a8ce6f54e301d9403da36f9643f9eb1b971cacf3a128f5837b85f0f3053db213"} Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.485574 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v88c" event={"ID":"88b3808a-aa06-48ab-9b57-f474a2e1379a","Type":"ContainerStarted","Data":"4f09d3a6ac11c76f074883c751146dc2e0c65ff684250918a6b12f70d1815a59"} Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.487210 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"e88aa20b-e3aa-4cc2-856c-0dd5e9394992","Type":"ContainerStarted","Data":"4a8e74249b93d523f6ff46053629cce981e04cada3afea5a2fe676f782f9c84a"} Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.488550 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2rzs" event={"ID":"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9","Type":"ContainerStarted","Data":"c0d8a02460c67b646af3631ad0dce7aa077a5a9e907c2b8a02543b1c3c968606"} Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.489923 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nthqj" event={"ID":"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e","Type":"ContainerStarted","Data":"461d9bec2eafa95296bfe9ed6d6ed0382e6d240aa56ca0934f9076d1c3e426f6"} Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.491234 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjs4t" event={"ID":"a12a62bb-3713-4f66-902e-673cc09db2ee","Type":"ContainerStarted","Data":"4bd4d7a6ce0f5eabb7a54beaa4c2580649af38778d45594fed69a145ecfcece7"} Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.546687 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b2rzs" podStartSLOduration=3.278991198 podStartE2EDuration="1m27.54666955s" podCreationTimestamp="2026-01-22 16:32:26 +0000 UTC" firstStartedPulling="2026-01-22 16:32:28.343593984 +0000 UTC m=+169.826933269" lastFinishedPulling="2026-01-22 16:33:52.611272336 +0000 UTC m=+254.094611621" observedRunningTime="2026-01-22 16:33:53.540614349 +0000 UTC m=+255.023953654" watchObservedRunningTime="2026-01-22 16:33:53.54666955 +0000 UTC m=+255.030008835" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.601950 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8v88c" podStartSLOduration=3.303887509 podStartE2EDuration="1m27.601924606s" podCreationTimestamp="2026-01-22 16:32:26 +0000 UTC" firstStartedPulling="2026-01-22 16:32:28.341927897 +0000 UTC m=+169.825267202" lastFinishedPulling="2026-01-22 16:33:52.639964994 +0000 UTC m=+254.123304299" observedRunningTime="2026-01-22 16:33:53.599839168 +0000 UTC m=+255.083178453" watchObservedRunningTime="2026-01-22 16:33:53.601924606 +0000 UTC m=+255.085263891" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.681446 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mh88h" podStartSLOduration=3.40405075 podStartE2EDuration="1m27.681412675s" podCreationTimestamp="2026-01-22 16:32:26 +0000 UTC" firstStartedPulling="2026-01-22 16:32:28.348823411 
+0000 UTC m=+169.832162696" lastFinishedPulling="2026-01-22 16:33:52.626185336 +0000 UTC m=+254.109524621" observedRunningTime="2026-01-22 16:33:53.676372442 +0000 UTC m=+255.159711727" watchObservedRunningTime="2026-01-22 16:33:53.681412675 +0000 UTC m=+255.164751960" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.972791 4758 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.973131 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790" gracePeriod=15 Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.973252 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b" gracePeriod=15 Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.973311 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649" gracePeriod=15 Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.973453 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb" gracePeriod=15 Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.974182 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6" gracePeriod=15 Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.975525 4758 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:33:53 crc kubenswrapper[4758]: E0122 16:33:53.976174 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976200 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 16:33:53 crc kubenswrapper[4758]: E0122 16:33:53.976235 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976244 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 16:33:53 crc kubenswrapper[4758]: E0122 16:33:53.976265 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976273 
4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:33:53 crc kubenswrapper[4758]: E0122 16:33:53.976284 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976292 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 16:33:53 crc kubenswrapper[4758]: E0122 16:33:53.976303 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976313 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 16:33:53 crc kubenswrapper[4758]: E0122 16:33:53.976326 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a92181-c525-4fa5-b436-689963b5fad6" containerName="pruner" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976335 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a92181-c525-4fa5-b436-689963b5fad6" containerName="pruner" Jan 22 16:33:53 crc kubenswrapper[4758]: E0122 16:33:53.976347 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976355 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:33:53 crc kubenswrapper[4758]: E0122 16:33:53.976407 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976416 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976566 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976577 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976584 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976594 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976602 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976616 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a92181-c525-4fa5-b436-689963b5fad6" containerName="pruner" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.976901 4758 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.978546 4758 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.979207 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:53 crc kubenswrapper[4758]: I0122 16:33:53.985926 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.003051 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.003220 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.003350 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.003579 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.003681 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.003783 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.004011 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.004163 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106007 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106079 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106111 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106140 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106155 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106173 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106187 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106224 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106289 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106334 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106376 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106404 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106431 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106457 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106478 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.106496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.221596 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.222428 4758 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.223276 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.223662 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.223989 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.224025 4758 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.224452 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="200ms" Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.425880 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="400ms" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.527507 4758 generic.go:334] "Generic (PLEG): container finished" podID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerID="461d9bec2eafa95296bfe9ed6d6ed0382e6d240aa56ca0934f9076d1c3e426f6" exitCode=0 Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.527570 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nthqj" event={"ID":"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e","Type":"ContainerDied","Data":"461d9bec2eafa95296bfe9ed6d6ed0382e6d240aa56ca0934f9076d1c3e426f6"} Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.528588 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.529992 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-nthqj.188d1abe98675b3a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-nthqj,UID:25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e,APIVersion:v1,ResourceVersion:28597,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 16:33:54.52949177 +0000 UTC m=+256.012831045,LastTimestamp:2026-01-22 16:33:54.52949177 +0000 UTC m=+256.012831045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.530787 4758 generic.go:334] "Generic (PLEG): container finished" podID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerID="5c7fd3b6b998083fd2f09c10a4cf8852ce10f3d758f76897756b5137a5c54138" exitCode=0 Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.530891 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6qmr" event={"ID":"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4","Type":"ContainerDied","Data":"5c7fd3b6b998083fd2f09c10a4cf8852ce10f3d758f76897756b5137a5c54138"} Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.531545 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.531772 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.534506 4758 generic.go:334] "Generic (PLEG): container finished" podID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerID="4bd4d7a6ce0f5eabb7a54beaa4c2580649af38778d45594fed69a145ecfcece7" exitCode=0 Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.534577 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjs4t" event={"ID":"a12a62bb-3713-4f66-902e-673cc09db2ee","Type":"ContainerDied","Data":"4bd4d7a6ce0f5eabb7a54beaa4c2580649af38778d45594fed69a145ecfcece7"} Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.535876 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.536110 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.536405 4758 status_manager.go:851] 
"Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.539355 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.540526 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.541392 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6" exitCode=0 Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.541420 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649" exitCode=0 Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.541430 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb" exitCode=0 Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.541440 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b" exitCode=2 Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.541493 4758 scope.go:117] "RemoveContainer" containerID="5b6fb073b50f33fe8f95bdb6efdcc4cbf59f909344bad9932a1db1e84bd48a43" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.810967 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.811328 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.811461 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:54 crc kubenswrapper[4758]: I0122 16:33:54.811595 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" 
Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.814871 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-559wb" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" Jan 22 16:33:54 crc kubenswrapper[4758]: E0122 16:33:54.892261 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="800ms" Jan 22 16:33:55 crc kubenswrapper[4758]: I0122 16:33:55.549321 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 16:33:55 crc kubenswrapper[4758]: E0122 16:33:55.698569 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="1.6s" Jan 22 16:33:56 crc kubenswrapper[4758]: E0122 16:33:56.251306 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:33:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:33:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:33:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:33:56Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0934f30eb8f9333151bdb8fb7ad24fe19bb186a20d28b0541182f909fb8f0145\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dac313fa046b5a0965a26ce6996a51a0a3a77668fdbe4a5e5beea707e8024a2f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202844902},{\\\"names\\\":[],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e8b80caacac4b73aab52e45466d44499a5cf4750b1a632509a28c1edda1f1a0d\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha
256:e8e328555353cb9f84f5a8b2142aff1ebb0f41f8b6db91fa21f05b580d5cfce8\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1170343151},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b
9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: E0122 16:33:56.257203 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: E0122 16:33:56.260229 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: E0122 16:33:56.260408 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: E0122 16:33:56.260566 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: E0122 16:33:56.260579 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.393451 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.393839 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.557714 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6qmr" event={"ID":"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4","Type":"ContainerStarted","Data":"b62c8a1fcabd0d3a97f1533146cf8f0b11b055bc3905fefb7b0f4dd045495ade"} Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.559217 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: 
connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.559954 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjs4t" event={"ID":"a12a62bb-3713-4f66-902e-673cc09db2ee","Type":"ContainerStarted","Data":"b9a1b8bc551fc1a90f093ecbae7e6a2e5dee6207119888b22c551b5e4ad3baf0"} Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.560092 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.560784 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.561140 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.561597 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.562009 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.562196 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.562454 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.563018 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.563698 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790" exitCode=0 Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.565441 4758 generic.go:334] "Generic (PLEG): container finished" podID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerID="4a8e74249b93d523f6ff46053629cce981e04cada3afea5a2fe676f782f9c84a" exitCode=0 Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.565495 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"e88aa20b-e3aa-4cc2-856c-0dd5e9394992","Type":"ContainerDied","Data":"4a8e74249b93d523f6ff46053629cce981e04cada3afea5a2fe676f782f9c84a"} Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.566042 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.567100 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.567691 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.567766 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nthqj" event={"ID":"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e","Type":"ContainerStarted","Data":"5280a70e36ea601ca10423751b3ae6b4478b1c7552ba2a0beb14a05778f13a39"} Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.569111 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.569345 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.569618 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.569797 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" 
pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.569937 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.570071 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.570204 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.658470 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.658528 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:33:56 crc kubenswrapper[4758]: E0122 16:33:56.810718 4758 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" volumeName="registry-storage" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.888370 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.888408 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.928396 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.928721 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.928972 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.929148 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.929380 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.929654 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.929918 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.934723 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.935086 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.935288 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.935513 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.935785 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc 
kubenswrapper[4758]: I0122 16:33:56.936063 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.936284 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.936435 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.936715 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.936972 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.937182 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.937382 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.937618 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.937851 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.938152 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.938367 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:56 crc kubenswrapper[4758]: I0122 16:33:56.938587 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.182291 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.184440 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.184973 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.185262 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.185671 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.185935 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.186204 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.186433 4758 status_manager.go:851] "Failed to get status for pod" 
podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.186699 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.186983 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.187198 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: E0122 16:33:57.299356 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="3.2s" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.314527 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.314574 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.314701 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.314964 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.314995 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.315011 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.415505 4758 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.415533 4758 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.415541 4758 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.576839 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.577766 4758 scope.go:117] "RemoveContainer" containerID="87c18b3906201284f2540b773d4f5fbffaea57daacfefce1029d93d720194dd6" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.577865 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.600508 4758 scope.go:117] "RemoveContainer" containerID="d8a81e000000ba4aa645351dcf434edb5b12528964db33474e60876746683649" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.600730 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.601096 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.601355 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.601686 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.601992 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.602253 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.602425 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.602591 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.603961 4758 status_manager.go:851] "Failed to get status for pod" 
podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.622208 4758 scope.go:117] "RemoveContainer" containerID="fedf76405ddde13b0c8f7bc13033a7ba622f1be6ac2afcaaf1a7a4a60ac040eb" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.642653 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.643136 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.643316 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.643514 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.643800 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.644104 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.644584 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.644834 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.645083 4758 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.645324 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.647943 4758 scope.go:117] "RemoveContainer" containerID="d59803b0f757f6233c5e4c1cc56879aa0296bee1355d841c776e1558c427b35b" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.664623 4758 scope.go:117] "RemoveContainer" containerID="9d526b111a87700ab734b327bebd78e420a67d05db7318cedc9a1d1ecd1a9790" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.687459 4758 scope.go:117] "RemoveContainer" containerID="36275017d22352ed71de19d12ac55d0c008b0f4abbee86b7760f3f557ff4ebe4" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.689683 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.690538 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.690890 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.691160 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.691579 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.694879 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.695486 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" 
pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.695865 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.696309 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:57 crc kubenswrapper[4758]: I0122 16:33:57.696957 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.471731 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.472068 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.507639 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.508375 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.508831 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.509112 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.509468 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: 
connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.509974 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.510228 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.510455 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.510770 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.511039 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.589718 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"e88aa20b-e3aa-4cc2-856c-0dd5e9394992","Type":"ContainerStarted","Data":"c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6"} Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.592333 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.592703 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.593214 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 
16:33:58.593572 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.593916 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.594336 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.594553 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.594849 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.595330 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.810514 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.810684 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.810845 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 
16:33:58.810986 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.811125 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.811261 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.811413 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.811564 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.811697 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.816357 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.884709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.885340 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.926991 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.927502 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 
crc kubenswrapper[4758]: I0122 16:33:58.927842 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.928239 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.928533 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.928829 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.929134 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.929378 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:58 crc kubenswrapper[4758]: I0122 16:33:58.929632 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:33:59 crc kubenswrapper[4758]: E0122 16:33:59.005285 4758 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:59 crc kubenswrapper[4758]: I0122 16:33:59.005801 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:33:59 crc kubenswrapper[4758]: I0122 16:33:59.591659 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:33:59 crc kubenswrapper[4758]: I0122 16:33:59.591965 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:33:59 crc kubenswrapper[4758]: I0122 16:33:59.592326 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e9d7c4f8d892319f3b754ad3336e7dc3fe04a276bb32ef51ee6952f265cc107d"} Jan 22 16:34:00 crc kubenswrapper[4758]: E0122 16:34:00.499853 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="6.4s" Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.643961 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s7bgv" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="registry-server" probeResult="failure" output=< Jan 22 16:34:00 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 16:34:00 crc kubenswrapper[4758]: > Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.676666 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.677790 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.678530 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.679400 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.679851 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.680223 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.680604 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.680953 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:00 crc kubenswrapper[4758]: I0122 16:34:00.681269 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:00 crc kubenswrapper[4758]: E0122 16:34:00.688657 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-nthqj.188d1abe98675b3a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-nthqj,UID:25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e,APIVersion:v1,ResourceVersion:28597,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 16:33:54.52949177 +0000 UTC m=+256.012831045,LastTimestamp:2026-01-22 16:33:54.52949177 +0000 UTC m=+256.012831045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.613646 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"488e56973319c20746a71443384c19979ff0582b2d2bf8b3c346a98e39acfe96"} Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.614507 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.615227 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: E0122 16:34:02.615464 4758 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.615795 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.616263 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.616380 4758 generic.go:334] "Generic (PLEG): container finished" podID="3893a5d6-af77-48c5-a325-35d144e54f8a" containerID="07d6e00e2f24be95948ac7290716b0a25b0444aef40bdd61ac83888d200febf2" exitCode=0 Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.616636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3893a5d6-af77-48c5-a325-35d144e54f8a","Type":"ContainerDied","Data":"07d6e00e2f24be95948ac7290716b0a25b0444aef40bdd61ac83888d200febf2"} Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.617191 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.617527 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.617889 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.618319 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.618860 4758 status_manager.go:851] "Failed 
to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.619338 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.619672 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.620362 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.621274 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.624256 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.624924 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.625364 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:02 crc kubenswrapper[4758]: I0122 16:34:02.625833 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: E0122 16:34:03.622573 4758 kubelet.go:1929] "Failed creating 
a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.840508 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.843027 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.843245 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.843483 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.843687 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.843901 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.844081 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.844282 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.844473 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: 
connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.844653 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.908152 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-kubelet-dir\") pod \"3893a5d6-af77-48c5-a325-35d144e54f8a\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.908191 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-var-lock\") pod \"3893a5d6-af77-48c5-a325-35d144e54f8a\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.908215 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3893a5d6-af77-48c5-a325-35d144e54f8a-kube-api-access\") pod \"3893a5d6-af77-48c5-a325-35d144e54f8a\" (UID: \"3893a5d6-af77-48c5-a325-35d144e54f8a\") " Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.909220 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3893a5d6-af77-48c5-a325-35d144e54f8a" (UID: "3893a5d6-af77-48c5-a325-35d144e54f8a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.909249 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-var-lock" (OuterVolumeSpecName: "var-lock") pod "3893a5d6-af77-48c5-a325-35d144e54f8a" (UID: "3893a5d6-af77-48c5-a325-35d144e54f8a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:34:03 crc kubenswrapper[4758]: I0122 16:34:03.914522 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3893a5d6-af77-48c5-a325-35d144e54f8a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3893a5d6-af77-48c5-a325-35d144e54f8a" (UID: "3893a5d6-af77-48c5-a325-35d144e54f8a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.010222 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.010257 4758 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3893a5d6-af77-48c5-a325-35d144e54f8a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.010265 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3893a5d6-af77-48c5-a325-35d144e54f8a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.627819 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3893a5d6-af77-48c5-a325-35d144e54f8a","Type":"ContainerDied","Data":"23e21bfce556e5bca38f3671daad5440530fd74d88c4ae332699434c92cc2636"} Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.627857 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.627860 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23e21bfce556e5bca38f3671daad5440530fd74d88c4ae332699434c92cc2636" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.644606 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.645112 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.645578 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.646312 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.647018 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:04 
crc kubenswrapper[4758]: I0122 16:34:04.647488 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.647922 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.648197 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:04 crc kubenswrapper[4758]: I0122 16:34:04.648748 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.462581 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.463415 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.463844 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.463944 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.464368 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.464894 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.466054 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.467012 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.467714 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.468403 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.469209 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.469784 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.506384 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.506963 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.507154 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.507331 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: 
I0122 16:34:06.507573 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.507860 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.508016 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.508183 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.508974 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.509411 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: E0122 16:34:06.656984 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:34:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:34:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:34:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:34:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0934f30eb8f9333151bdb8fb7ad24fe19bb186a20d28b0541182f909fb8f0145\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dac313fa046b5a0965a26ce6996a51a0a3a77668fdbe4a5e5beea707e8024a2f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202844902},{\\\"names\\\":[],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e8b80caacac4b73aab52e45466d44499a5cf4750b1a632509a28c1edda1f1a0d\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e8e328555353cb9f84f5a8b2142aff1ebb0f41f8b6db91fa21f05b580d5cfce8\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1170343151},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac
0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: E0122 16:34:06.657672 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc 
kubenswrapper[4758]: E0122 16:34:06.658078 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: E0122 16:34:06.659716 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: E0122 16:34:06.660129 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: E0122 16:34:06.660163 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.674836 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.675480 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.676978 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.677214 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.677409 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.677594 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.677818 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.678155 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.678551 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.678860 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.808587 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.809053 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.809314 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.809608 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.809925 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.810265 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.810561 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.810814 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: I0122 16:34:06.811069 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:06 crc kubenswrapper[4758]: E0122 16:34:06.900719 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="7s" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.268016 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.268131 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.530262 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.531111 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.531721 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.532195 4758 status_manager.go:851] "Failed to get status for 
pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.532632 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.533097 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.533446 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.533809 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.534168 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.534604 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.808237 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.812893 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.813171 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.813376 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.813722 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.814247 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.814574 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.814897 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.815225 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.815626 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.816123 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.816422 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.816860 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.817133 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.817387 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.817691 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.817983 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.818231 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.818587 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.830528 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.830571 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:08 crc kubenswrapper[4758]: E0122 16:34:08.831099 4758 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:34:08 crc kubenswrapper[4758]: I0122 16:34:08.831691 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:34:08 crc kubenswrapper[4758]: W0122 16:34:08.874435 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-3e3235fda29df1d380d483430ade5510ef1664310e853f1d8da110edb87e4070 WatchSource:0}: Error finding container 3e3235fda29df1d380d483430ade5510ef1664310e853f1d8da110edb87e4070: Status 404 returned error can't find the container with id 3e3235fda29df1d380d483430ade5510ef1664310e853f1d8da110edb87e4070 Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.660204 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e3235fda29df1d380d483430ade5510ef1664310e853f1d8da110edb87e4070"} Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.667214 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.667921 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.668261 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.668731 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.669401 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" 
pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.669721 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.670134 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.670568 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.670974 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.671610 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.714722 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.715315 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.715833 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.716174 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: 
connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.716431 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.716741 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.717041 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.717292 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.717633 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:09 crc kubenswrapper[4758]: I0122 16:34:09.717874 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:10 crc kubenswrapper[4758]: E0122 16:34:10.690091 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-nthqj.188d1abe98675b3a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-nthqj,UID:25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e,APIVersion:v1,ResourceVersion:28597,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 16:33:54.52949177 +0000 UTC m=+256.012831045,LastTimestamp:2026-01-22 16:33:54.52949177 +0000 UTC m=+256.012831045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 16:34:13 crc 
kubenswrapper[4758]: I0122 16:34:13.686657 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.687137 4758 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b" exitCode=1 Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.687178 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b"} Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.687715 4758 scope.go:117] "RemoveContainer" containerID="9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.687979 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.688429 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.688707 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.688950 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.689265 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.689589 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.689906 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" 
pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.690279 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.690612 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: I0122 16:34:13.690980 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:13 crc kubenswrapper[4758]: E0122 16:34:13.901831 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="7s" Jan 22 16:34:14 crc kubenswrapper[4758]: I0122 16:34:14.693420 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cd290fb89be5c33bb174a21568fe0c1a36df61d8437560cdf7f9ebf98b5621c1"} Jan 22 16:34:15 crc kubenswrapper[4758]: I0122 16:34:15.295893 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:34:15 crc kubenswrapper[4758]: I0122 16:34:15.699946 4758 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="cd290fb89be5c33bb174a21568fe0c1a36df61d8437560cdf7f9ebf98b5621c1" exitCode=0 Jan 22 16:34:15 crc kubenswrapper[4758]: I0122 16:34:15.699994 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"cd290fb89be5c33bb174a21568fe0c1a36df61d8437560cdf7f9ebf98b5621c1"} Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.712063 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.712702 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"087a29a92b87397845777f3d37268935361fbcdc0080c0ed7d757240b78974bb"} Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.713668 4758 status_manager.go:851] "Failed to get status for pod" 
podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.713907 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.714407 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.714786 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.714960 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.715175 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.715419 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.716565 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.716809 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.717062 4758 status_manager.go:851] "Failed 
to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.717553 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-559wb" event={"ID":"895a8f2e-590a-4270-9eb0-1f7c76da93d9","Type":"ContainerStarted","Data":"e8ed5fc8196221585826d54aa6de4928df87bba04e5bc995b771c9ee1463907a"} Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.717836 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.717860 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.718346 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: E0122 16:34:16.718341 4758 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.718556 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.718727 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.718977 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.719286 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.719514 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" 
pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.719875 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.720073 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.720333 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.720907 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.721205 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.721520 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.721880 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.722233 4758 status_manager.go:851] "Failed to get status for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.722648 4758 status_manager.go:851] "Failed to get status for pod" 
podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.723021 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.723447 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.723983 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.724288 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.724591 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: E0122 16:34:16.770305 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:34:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:34:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:34:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:34:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0934f30eb8f9333151bdb8fb7ad24fe19bb186a20d28b0541182f909fb8f0145\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dac313fa046b5a0965a26ce6996a51a0a3a77668fdbe4a5e5beea707e8024a2f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202844902},{\\\"names\\\":[],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e8b80caacac4b73aab52e45466d44499a5cf4750b1a632509a28c1edda1f1a0d\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e8e328555353cb9f84f5a8b2142aff1ebb0f41f8b6db91fa21f05b580d5cfce8\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1170343151},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac
0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: E0122 16:34:16.771089 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc 
kubenswrapper[4758]: E0122 16:34:16.771324 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: E0122 16:34:16.771666 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: E0122 16:34:16.772182 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: E0122 16:34:16.772204 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.806810 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.818360 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.819651 4758 status_manager.go:851] "Failed to get status for pod" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" pod="openshift-marketplace/community-operators-c6qmr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c6qmr\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.820104 4758 status_manager.go:851] "Failed to get status for pod" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" pod="openshift-marketplace/redhat-operators-559wb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-559wb\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.820403 4758 status_manager.go:851] "Failed to get status for pod" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" pod="openshift-marketplace/community-operators-b2rzs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b2rzs\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.820627 4758 status_manager.go:851] "Failed to get status for pod" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" pod="openshift-marketplace/redhat-operators-s7bgv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s7bgv\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.820953 4758 status_manager.go:851] "Failed to get status for pod" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" pod="openshift-marketplace/certified-operators-8v88c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8v88c\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.821190 4758 status_manager.go:851] "Failed to get status 
for pod" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" pod="openshift-marketplace/redhat-marketplace-nthqj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nthqj\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.821470 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.821684 4758 status_manager.go:851] "Failed to get status for pod" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" pod="openshift-marketplace/certified-operators-mh88h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mh88h\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.821918 4758 status_manager.go:851] "Failed to get status for pod" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:16 crc kubenswrapper[4758]: I0122 16:34:16.822167 4758 status_manager.go:851] "Failed to get status for pod" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" pod="openshift-marketplace/redhat-marketplace-wjs4t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wjs4t\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 16:34:17 crc kubenswrapper[4758]: I0122 16:34:17.725645 4758 generic.go:334] "Generic (PLEG): container finished" podID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerID="e8ed5fc8196221585826d54aa6de4928df87bba04e5bc995b771c9ee1463907a" exitCode=0 Jan 22 16:34:17 crc kubenswrapper[4758]: I0122 16:34:17.725726 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-559wb" event={"ID":"895a8f2e-590a-4270-9eb0-1f7c76da93d9","Type":"ContainerDied","Data":"e8ed5fc8196221585826d54aa6de4928df87bba04e5bc995b771c9ee1463907a"} Jan 22 16:34:17 crc kubenswrapper[4758]: I0122 16:34:17.728767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"974ff0c47d6de1ef92f2a947eccbb3f0e896c31f3162411594f7b4c4ce071949"} Jan 22 16:34:17 crc kubenswrapper[4758]: I0122 16:34:17.729143 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:34:17 crc kubenswrapper[4758]: I0122 16:34:17.729159 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8e33eb125ab84769bb47bfb5bbf4c3643562a9ae950fe7f4a6f3ddde4057d86b"} Jan 22 16:34:18 crc kubenswrapper[4758]: I0122 16:34:18.738879 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8524625596fcbfe582a5732e64ee3e18558e6130d45354cf6e4e018f72b72ca4"} Jan 22 16:34:18 crc kubenswrapper[4758]: I0122 16:34:18.738928 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d85ec8bd9b038aaa3b4ff14660278fde919e1c14f95d6bd87146e2cb8c6e4573"} Jan 22 16:34:20 crc kubenswrapper[4758]: I0122 16:34:20.751730 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c6cfebd4cd91c2a252ecf6d339e64e6df35067db13561a3991dea9fe21832d48"} Jan 22 16:34:22 crc kubenswrapper[4758]: I0122 16:34:21.757976 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:34:22 crc kubenswrapper[4758]: I0122 16:34:21.758071 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:22 crc kubenswrapper[4758]: I0122 16:34:21.758245 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:22 crc kubenswrapper[4758]: I0122 16:34:21.767287 4758 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:34:22 crc kubenswrapper[4758]: I0122 16:34:22.762940 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:22 crc kubenswrapper[4758]: I0122 16:34:22.762974 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:23 crc kubenswrapper[4758]: I0122 16:34:23.778860 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-559wb" event={"ID":"895a8f2e-590a-4270-9eb0-1f7c76da93d9","Type":"ContainerStarted","Data":"b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3"} Jan 22 16:34:23 crc kubenswrapper[4758]: I0122 16:34:23.832453 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:34:23 crc kubenswrapper[4758]: I0122 16:34:23.832513 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:34:23 crc kubenswrapper[4758]: I0122 16:34:23.832985 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:23 crc kubenswrapper[4758]: I0122 16:34:23.833010 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:23 crc kubenswrapper[4758]: I0122 16:34:23.839982 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:34:23 crc kubenswrapper[4758]: I0122 16:34:23.935064 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1c554ef0-8c26-48e1-8edd-12d9fe089743" 
Jan 22 16:34:24 crc kubenswrapper[4758]: I0122 16:34:24.783204 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:24 crc kubenswrapper[4758]: I0122 16:34:24.783234 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:34:24 crc kubenswrapper[4758]: I0122 16:34:24.786700 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1c554ef0-8c26-48e1-8edd-12d9fe089743" Jan 22 16:34:28 crc kubenswrapper[4758]: I0122 16:34:28.272266 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:34:29 crc kubenswrapper[4758]: I0122 16:34:29.689103 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:34:29 crc kubenswrapper[4758]: I0122 16:34:29.689293 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:34:29 crc kubenswrapper[4758]: I0122 16:34:29.727236 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:34:29 crc kubenswrapper[4758]: I0122 16:34:29.852170 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:34:36 crc kubenswrapper[4758]: I0122 16:34:36.755661 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 16:34:37 crc kubenswrapper[4758]: I0122 16:34:37.136342 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 16:34:37 crc kubenswrapper[4758]: I0122 16:34:37.138563 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 16:34:37 crc kubenswrapper[4758]: I0122 16:34:37.230959 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 16:34:38 crc kubenswrapper[4758]: I0122 16:34:38.540996 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 16:34:38 crc kubenswrapper[4758]: I0122 16:34:38.701438 4758 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 22 16:34:40 crc kubenswrapper[4758]: I0122 16:34:40.109905 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 16:34:41 crc kubenswrapper[4758]: I0122 16:34:41.393365 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 16:34:41 crc kubenswrapper[4758]: I0122 16:34:41.952685 4758 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 16:34:41 crc kubenswrapper[4758]: I0122 16:34:41.972889 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 16:34:42 crc 
kubenswrapper[4758]: I0122 16:34:42.207814 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 16:34:42 crc kubenswrapper[4758]: I0122 16:34:42.864461 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 16:34:43 crc kubenswrapper[4758]: I0122 16:34:43.510999 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 16:34:43 crc kubenswrapper[4758]: I0122 16:34:43.719221 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 16:34:43 crc kubenswrapper[4758]: I0122 16:34:43.983692 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 16:34:45 crc kubenswrapper[4758]: I0122 16:34:45.424868 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 16:34:45 crc kubenswrapper[4758]: I0122 16:34:45.793213 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 16:34:47 crc kubenswrapper[4758]: I0122 16:34:47.588214 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 16:34:47 crc kubenswrapper[4758]: I0122 16:34:47.699883 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 16:34:48 crc kubenswrapper[4758]: I0122 16:34:48.006588 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 16:34:48 crc kubenswrapper[4758]: I0122 16:34:48.878390 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 16:34:49 crc kubenswrapper[4758]: I0122 16:34:49.038874 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 16:34:52 crc kubenswrapper[4758]: I0122 16:34:52.287235 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 16:34:52 crc kubenswrapper[4758]: I0122 16:34:52.495825 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 16:34:55 crc kubenswrapper[4758]: I0122 16:34:55.398405 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 16:34:55 crc kubenswrapper[4758]: I0122 16:34:55.902863 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 16:34:56 crc kubenswrapper[4758]: I0122 16:34:56.669096 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 16:34:56 crc kubenswrapper[4758]: I0122 16:34:56.921944 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 16:34:56 crc kubenswrapper[4758]: I0122 16:34:56.946984 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 16:34:57 crc kubenswrapper[4758]: I0122 16:34:57.045848 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 16:34:57 crc kubenswrapper[4758]: I0122 16:34:57.628145 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 16:34:57 crc kubenswrapper[4758]: I0122 16:34:57.941172 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 16:34:57 crc kubenswrapper[4758]: I0122 16:34:57.961101 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 16:34:58 crc kubenswrapper[4758]: I0122 16:34:58.136585 4758 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 16:34:58 crc kubenswrapper[4758]: I0122 16:34:58.146869 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 16:34:58 crc kubenswrapper[4758]: I0122 16:34:58.178072 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 16:34:58 crc kubenswrapper[4758]: I0122 16:34:58.398589 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 16:34:58 crc kubenswrapper[4758]: I0122 16:34:58.744947 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 16:34:58 crc kubenswrapper[4758]: I0122 16:34:58.795833 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 16:34:58 crc kubenswrapper[4758]: I0122 16:34:58.993538 4758 generic.go:334] "Generic (PLEG): container finished" podID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerID="ad7762057c01299f540360f0792d6ba76ce7864075c83239ba128aa10145c676" exitCode=0 Jan 22 16:34:58 crc kubenswrapper[4758]: I0122 16:34:58.993585 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" event={"ID":"5caed3c6-9037-4ecf-b0db-778db52bd3ee","Type":"ContainerDied","Data":"ad7762057c01299f540360f0792d6ba76ce7864075c83239ba128aa10145c676"} Jan 22 16:34:58 crc kubenswrapper[4758]: I0122 16:34:58.994206 4758 scope.go:117] "RemoveContainer" containerID="ad7762057c01299f540360f0792d6ba76ce7864075c83239ba128aa10145c676" Jan 22 16:34:59 crc kubenswrapper[4758]: I0122 16:34:59.032710 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 16:34:59 crc kubenswrapper[4758]: I0122 16:34:59.072889 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 16:34:59 crc kubenswrapper[4758]: I0122 16:34:59.092502 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 16:34:59 crc kubenswrapper[4758]: I0122 16:34:59.281545 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 16:34:59 crc kubenswrapper[4758]: I0122 16:34:59.377354 4758 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 16:34:59 crc kubenswrapper[4758]: I0122 16:34:59.429734 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 16:34:59 crc kubenswrapper[4758]: I0122 16:34:59.853685 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 16:34:59 crc kubenswrapper[4758]: I0122 16:34:59.860251 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 16:35:00 crc kubenswrapper[4758]: I0122 16:35:00.004933 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fjsgm_5caed3c6-9037-4ecf-b0db-778db52bd3ee/marketplace-operator/1.log" Jan 22 16:35:00 crc kubenswrapper[4758]: I0122 16:35:00.005474 4758 generic.go:334] "Generic (PLEG): container finished" podID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerID="c6916366a4d57ca512d7ef0ae340c6bba9aab5100e71d5290324b747cfaaa815" exitCode=1 Jan 22 16:35:00 crc kubenswrapper[4758]: I0122 16:35:00.005534 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" event={"ID":"5caed3c6-9037-4ecf-b0db-778db52bd3ee","Type":"ContainerDied","Data":"c6916366a4d57ca512d7ef0ae340c6bba9aab5100e71d5290324b747cfaaa815"} Jan 22 16:35:00 crc kubenswrapper[4758]: I0122 16:35:00.005589 4758 scope.go:117] "RemoveContainer" containerID="ad7762057c01299f540360f0792d6ba76ce7864075c83239ba128aa10145c676" Jan 22 16:35:00 crc kubenswrapper[4758]: I0122 16:35:00.006128 4758 scope.go:117] "RemoveContainer" containerID="c6916366a4d57ca512d7ef0ae340c6bba9aab5100e71d5290324b747cfaaa815" Jan 22 16:35:00 crc kubenswrapper[4758]: E0122 16:35:00.006360 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-fjsgm_openshift-marketplace(5caed3c6-9037-4ecf-b0db-778db52bd3ee)\"" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" Jan 22 16:35:00 crc kubenswrapper[4758]: I0122 16:35:00.792571 4758 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 16:35:00 crc kubenswrapper[4758]: I0122 16:35:00.857672 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 16:35:00 crc kubenswrapper[4758]: I0122 16:35:00.898526 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 16:35:01 crc kubenswrapper[4758]: I0122 16:35:01.013099 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fjsgm_5caed3c6-9037-4ecf-b0db-778db52bd3ee/marketplace-operator/1.log" Jan 22 16:35:01 crc kubenswrapper[4758]: I0122 16:35:01.045950 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 16:35:01 crc kubenswrapper[4758]: I0122 16:35:01.068642 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 16:35:01 
crc kubenswrapper[4758]: I0122 16:35:01.110222 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 16:35:01 crc kubenswrapper[4758]: I0122 16:35:01.117963 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 16:35:01 crc kubenswrapper[4758]: I0122 16:35:01.163673 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 16:35:01 crc kubenswrapper[4758]: I0122 16:35:01.376774 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 16:35:01 crc kubenswrapper[4758]: I0122 16:35:01.736161 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 16:35:01 crc kubenswrapper[4758]: I0122 16:35:01.890715 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 16:35:02 crc kubenswrapper[4758]: I0122 16:35:02.178278 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 16:35:02 crc kubenswrapper[4758]: I0122 16:35:02.634700 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 16:35:02 crc kubenswrapper[4758]: I0122 16:35:02.635685 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 16:35:02 crc kubenswrapper[4758]: I0122 16:35:02.652243 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 16:35:02 crc kubenswrapper[4758]: I0122 16:35:02.683004 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 16:35:02 crc kubenswrapper[4758]: I0122 16:35:02.754336 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 16:35:02 crc kubenswrapper[4758]: I0122 16:35:02.817287 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 16:35:02 crc kubenswrapper[4758]: I0122 16:35:02.819105 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 16:35:03 crc kubenswrapper[4758]: I0122 16:35:03.431537 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 16:35:03 crc kubenswrapper[4758]: I0122 16:35:03.448543 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 16:35:03 crc kubenswrapper[4758]: I0122 16:35:03.488619 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 16:35:03 crc kubenswrapper[4758]: I0122 16:35:03.793565 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 16:35:03 crc kubenswrapper[4758]: I0122 16:35:03.801005 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 16:35:03 crc kubenswrapper[4758]: I0122 16:35:03.912499 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 16:35:04 crc kubenswrapper[4758]: I0122 16:35:04.011001 4758 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 16:35:04 crc kubenswrapper[4758]: I0122 16:35:04.258142 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 16:35:04 crc kubenswrapper[4758]: I0122 16:35:04.337487 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 16:35:04 crc kubenswrapper[4758]: I0122 16:35:04.419835 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 16:35:04 crc kubenswrapper[4758]: I0122 16:35:04.434610 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 16:35:04 crc kubenswrapper[4758]: I0122 16:35:04.462497 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 16:35:04 crc kubenswrapper[4758]: I0122 16:35:04.695322 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 16:35:04 crc kubenswrapper[4758]: I0122 16:35:04.929587 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 16:35:05 crc kubenswrapper[4758]: I0122 16:35:05.125717 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 16:35:05 crc kubenswrapper[4758]: I0122 16:35:05.450109 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 16:35:05 crc kubenswrapper[4758]: I0122 16:35:05.778932 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 16:35:06 crc kubenswrapper[4758]: I0122 16:35:06.408450 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 16:35:06 crc kubenswrapper[4758]: I0122 16:35:06.702148 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 16:35:06 crc kubenswrapper[4758]: I0122 16:35:06.925244 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.058819 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.135205 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.199121 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.209391 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.253164 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.253896 4758 scope.go:117] "RemoveContainer" containerID="c6916366a4d57ca512d7ef0ae340c6bba9aab5100e71d5290324b747cfaaa815" Jan 22 16:35:07 crc kubenswrapper[4758]: E0122 16:35:07.254189 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-fjsgm_openshift-marketplace(5caed3c6-9037-4ecf-b0db-778db52bd3ee)\"" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.254511 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.413141 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.602532 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.720121 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 16:35:07 crc kubenswrapper[4758]: I0122 16:35:07.915002 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 16:35:08 crc kubenswrapper[4758]: I0122 16:35:08.055000 4758 scope.go:117] "RemoveContainer" containerID="c6916366a4d57ca512d7ef0ae340c6bba9aab5100e71d5290324b747cfaaa815" Jan 22 16:35:08 crc kubenswrapper[4758]: E0122 16:35:08.055566 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-fjsgm_openshift-marketplace(5caed3c6-9037-4ecf-b0db-778db52bd3ee)\"" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" Jan 22 16:35:08 crc kubenswrapper[4758]: I0122 16:35:08.062221 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 16:35:08 crc kubenswrapper[4758]: I0122 16:35:08.234317 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 16:35:08 crc kubenswrapper[4758]: I0122 16:35:08.301919 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 16:35:08 crc kubenswrapper[4758]: I0122 16:35:08.489901 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 16:35:08 crc kubenswrapper[4758]: I0122 16:35:08.601458 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 16:35:08 crc kubenswrapper[4758]: I0122 16:35:08.612138 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 16:35:08 crc kubenswrapper[4758]: I0122 16:35:08.753985 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.074891 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.084458 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.100627 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.106199 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.264253 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.320311 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.395207 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.437627 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.437638 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.781323 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.940355 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 16:35:09 crc kubenswrapper[4758]: I0122 16:35:09.988327 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 16:35:10 crc kubenswrapper[4758]: I0122 16:35:10.101302 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 16:35:10 crc kubenswrapper[4758]: I0122 16:35:10.448121 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 16:35:10 crc kubenswrapper[4758]: I0122 16:35:10.591540 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 16:35:10 crc kubenswrapper[4758]: I0122 16:35:10.676557 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 16:35:10 crc kubenswrapper[4758]: I0122 16:35:10.689275 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 16:35:10 crc kubenswrapper[4758]: I0122 16:35:10.930334 4758 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.184878 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.273866 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.466821 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.528800 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.624031 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.716516 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.745083 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.761209 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.769119 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.773185 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 16:35:11 crc kubenswrapper[4758]: I0122 16:35:11.947972 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.034779 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.041130 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.077456 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.214310 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.276864 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.482289 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.711024 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.722713 
4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.761111 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 16:35:12 crc kubenswrapper[4758]: I0122 16:35:12.795152 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 16:35:13 crc kubenswrapper[4758]: I0122 16:35:13.206334 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 16:35:13 crc kubenswrapper[4758]: I0122 16:35:13.353320 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 16:35:13 crc kubenswrapper[4758]: I0122 16:35:13.369718 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 16:35:13 crc kubenswrapper[4758]: I0122 16:35:13.446334 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 16:35:13 crc kubenswrapper[4758]: I0122 16:35:13.487444 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 16:35:13 crc kubenswrapper[4758]: I0122 16:35:13.606526 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 16:35:13 crc kubenswrapper[4758]: I0122 16:35:13.670929 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 16:35:13 crc kubenswrapper[4758]: I0122 16:35:13.756120 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.051198 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.074209 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.102668 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.338906 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.346843 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.405571 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.765508 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.778424 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.813168 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.830385 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 16:35:14 crc kubenswrapper[4758]: I0122 16:35:14.836619 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.068094 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.096097 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.146955 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.262441 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.500219 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.745423 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.793075 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.879305 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.905503 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 16:35:15 crc kubenswrapper[4758]: I0122 16:35:15.926042 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 16:35:16 crc kubenswrapper[4758]: I0122 16:35:16.152916 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 16:35:16 crc kubenswrapper[4758]: I0122 16:35:16.287526 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 16:35:16 crc kubenswrapper[4758]: I0122 16:35:16.403519 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 16:35:16 crc kubenswrapper[4758]: I0122 16:35:16.574328 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 16:35:16 crc kubenswrapper[4758]: I0122 16:35:16.647647 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 16:35:16 crc kubenswrapper[4758]: I0122 16:35:16.917534 4758 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver"/"image-import-ca" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.121444 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.269545 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.320857 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.493851 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.745318 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.797559 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.854321 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.881311 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.886421 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 16:35:17 crc kubenswrapper[4758]: I0122 16:35:17.936507 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.091315 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.103646 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.322014 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.342159 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.370081 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.407595 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.529572 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.590887 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.600586 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 16:35:18 crc 
kubenswrapper[4758]: I0122 16:35:18.667380 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.725318 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.991607 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 16:35:18 crc kubenswrapper[4758]: I0122 16:35:18.998434 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.031282 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.038977 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.399016 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.443404 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.495590 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.529075 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.647083 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.947021 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.974243 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.979973 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 16:35:19 crc kubenswrapper[4758]: I0122 16:35:19.992879 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 16:35:20 crc kubenswrapper[4758]: I0122 16:35:20.062870 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 16:35:20 crc kubenswrapper[4758]: I0122 16:35:20.166613 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 16:35:20 crc kubenswrapper[4758]: I0122 16:35:20.356571 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 16:35:20 crc kubenswrapper[4758]: I0122 16:35:20.361534 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 16:35:20 crc kubenswrapper[4758]: I0122 16:35:20.885512 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 16:35:21 crc kubenswrapper[4758]: I0122 16:35:21.010793 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 16:35:21 crc kubenswrapper[4758]: I0122 16:35:21.318413 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 16:35:21 crc kubenswrapper[4758]: I0122 16:35:21.478384 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 16:35:21 crc kubenswrapper[4758]: I0122 16:35:21.707309 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 16:35:21 crc kubenswrapper[4758]: I0122 16:35:21.808253 4758 scope.go:117] "RemoveContainer" containerID="c6916366a4d57ca512d7ef0ae340c6bba9aab5100e71d5290324b747cfaaa815" Jan 22 16:35:21 crc kubenswrapper[4758]: I0122 16:35:21.836440 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.003414 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.202240 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.406265 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.447152 4758 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.448053 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.448906 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-559wb" podStartSLOduration=63.30380507 podStartE2EDuration="2m53.448890456s" podCreationTimestamp="2026-01-22 16:32:29 +0000 UTC" firstStartedPulling="2026-01-22 16:32:32.657390394 +0000 UTC m=+174.140729669" lastFinishedPulling="2026-01-22 16:34:22.80247577 +0000 UTC m=+284.285815055" observedRunningTime="2026-01-22 16:34:23.91556573 +0000 UTC m=+285.398905015" watchObservedRunningTime="2026-01-22 16:35:22.448890456 +0000 UTC m=+343.932229731" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.450480 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nthqj" podStartSLOduration=90.226689723 podStartE2EDuration="2m54.450472291s" podCreationTimestamp="2026-01-22 16:32:28 +0000 UTC" firstStartedPulling="2026-01-22 16:32:31.562056105 +0000 UTC m=+173.045395390" lastFinishedPulling="2026-01-22 16:33:55.785838673 +0000 UTC m=+257.269177958" observedRunningTime="2026-01-22 16:34:23.750393146 +0000 UTC m=+285.233732431" watchObservedRunningTime="2026-01-22 16:35:22.450472291 +0000 UTC m=+343.933811576" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.450926 4758 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wjs4t" podStartSLOduration=88.277285231 podStartE2EDuration="2m54.450918903s" podCreationTimestamp="2026-01-22 16:32:28 +0000 UTC" firstStartedPulling="2026-01-22 16:32:29.504764305 +0000 UTC m=+170.988103590" lastFinishedPulling="2026-01-22 16:33:55.678397977 +0000 UTC m=+257.161737262" observedRunningTime="2026-01-22 16:34:23.831068244 +0000 UTC m=+285.314407529" watchObservedRunningTime="2026-01-22 16:35:22.450918903 +0000 UTC m=+343.934258178" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.451319 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7bgv" podStartSLOduration=87.616039153 podStartE2EDuration="2m53.451314094s" podCreationTimestamp="2026-01-22 16:32:29 +0000 UTC" firstStartedPulling="2026-01-22 16:32:31.573216971 +0000 UTC m=+173.056556256" lastFinishedPulling="2026-01-22 16:33:57.408491912 +0000 UTC m=+258.891831197" observedRunningTime="2026-01-22 16:34:23.711484607 +0000 UTC m=+285.194823892" watchObservedRunningTime="2026-01-22 16:35:22.451314094 +0000 UTC m=+343.934653379" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.451839 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c6qmr" podStartSLOduration=88.906492727 podStartE2EDuration="2m56.451832029s" podCreationTimestamp="2026-01-22 16:32:26 +0000 UTC" firstStartedPulling="2026-01-22 16:32:28.347559706 +0000 UTC m=+169.830898991" lastFinishedPulling="2026-01-22 16:33:55.892899008 +0000 UTC m=+257.376238293" observedRunningTime="2026-01-22 16:34:23.853179613 +0000 UTC m=+285.336518908" watchObservedRunningTime="2026-01-22 16:35:22.451832029 +0000 UTC m=+343.935171304" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.452269 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.452372 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.452890 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.452926 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f128c8ae-2e32-4884-a296-728579141589" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.457572 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.460421 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.477118 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=61.477085655 podStartE2EDuration="1m1.477085655s" podCreationTimestamp="2026-01-22 16:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:35:22.471508299 +0000 UTC m=+343.954847594" watchObservedRunningTime="2026-01-22 16:35:22.477085655 +0000 UTC m=+343.960424970" Jan 22 16:35:22 crc 
kubenswrapper[4758]: I0122 16:35:22.505130 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 16:35:22 crc kubenswrapper[4758]: I0122 16:35:22.748568 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 16:35:23 crc kubenswrapper[4758]: I0122 16:35:23.045134 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 16:35:23 crc kubenswrapper[4758]: I0122 16:35:23.673071 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 16:35:23 crc kubenswrapper[4758]: I0122 16:35:23.820013 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 16:35:23 crc kubenswrapper[4758]: I0122 16:35:23.827191 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 16:35:23 crc kubenswrapper[4758]: I0122 16:35:23.828229 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 16:35:23 crc kubenswrapper[4758]: I0122 16:35:23.926103 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 16:35:24 crc kubenswrapper[4758]: I0122 16:35:24.150033 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fjsgm_5caed3c6-9037-4ecf-b0db-778db52bd3ee/marketplace-operator/2.log" Jan 22 16:35:24 crc kubenswrapper[4758]: I0122 16:35:24.150602 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fjsgm_5caed3c6-9037-4ecf-b0db-778db52bd3ee/marketplace-operator/1.log" Jan 22 16:35:24 crc kubenswrapper[4758]: I0122 16:35:24.150655 4758 generic.go:334] "Generic (PLEG): container finished" podID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerID="22de38a9c2a4d1c3947051d22cf68d2bab824700160f14bc5261fa2c0278b3d2" exitCode=1 Jan 22 16:35:24 crc kubenswrapper[4758]: I0122 16:35:24.150697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" event={"ID":"5caed3c6-9037-4ecf-b0db-778db52bd3ee","Type":"ContainerDied","Data":"22de38a9c2a4d1c3947051d22cf68d2bab824700160f14bc5261fa2c0278b3d2"} Jan 22 16:35:24 crc kubenswrapper[4758]: I0122 16:35:24.150786 4758 scope.go:117] "RemoveContainer" containerID="c6916366a4d57ca512d7ef0ae340c6bba9aab5100e71d5290324b747cfaaa815" Jan 22 16:35:24 crc kubenswrapper[4758]: I0122 16:35:24.151604 4758 scope.go:117] "RemoveContainer" containerID="22de38a9c2a4d1c3947051d22cf68d2bab824700160f14bc5261fa2c0278b3d2" Jan 22 16:35:24 crc kubenswrapper[4758]: E0122 16:35:24.151944 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-fjsgm_openshift-marketplace(5caed3c6-9037-4ecf-b0db-778db52bd3ee)\"" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" Jan 22 16:35:24 crc kubenswrapper[4758]: I0122 16:35:24.201040 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 16:35:24 crc 
kubenswrapper[4758]: I0122 16:35:24.323618 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 16:35:24 crc kubenswrapper[4758]: I0122 16:35:24.339452 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 16:35:25 crc kubenswrapper[4758]: I0122 16:35:25.159729 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fjsgm_5caed3c6-9037-4ecf-b0db-778db52bd3ee/marketplace-operator/2.log" Jan 22 16:35:25 crc kubenswrapper[4758]: I0122 16:35:25.161629 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 16:35:25 crc kubenswrapper[4758]: I0122 16:35:25.210579 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 16:35:25 crc kubenswrapper[4758]: I0122 16:35:25.355655 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 16:35:26 crc kubenswrapper[4758]: I0122 16:35:26.068607 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 16:35:26 crc kubenswrapper[4758]: I0122 16:35:26.101823 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 16:35:26 crc kubenswrapper[4758]: I0122 16:35:26.152796 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 16:35:26 crc kubenswrapper[4758]: I0122 16:35:26.200183 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 16:35:26 crc kubenswrapper[4758]: I0122 16:35:26.567212 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 16:35:26 crc kubenswrapper[4758]: I0122 16:35:26.826454 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 16:35:26 crc kubenswrapper[4758]: I0122 16:35:26.918801 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 16:35:27 crc kubenswrapper[4758]: I0122 16:35:27.251805 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:35:27 crc kubenswrapper[4758]: I0122 16:35:27.252117 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:35:27 crc kubenswrapper[4758]: I0122 16:35:27.252621 4758 scope.go:117] "RemoveContainer" containerID="22de38a9c2a4d1c3947051d22cf68d2bab824700160f14bc5261fa2c0278b3d2" Jan 22 16:35:27 crc kubenswrapper[4758]: E0122 16:35:27.252891 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-fjsgm_openshift-marketplace(5caed3c6-9037-4ecf-b0db-778db52bd3ee)\"" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" 
podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" Jan 22 16:35:27 crc kubenswrapper[4758]: I0122 16:35:27.280222 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 16:35:27 crc kubenswrapper[4758]: I0122 16:35:27.286082 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=1.286061025 podStartE2EDuration="1.286061025s" podCreationTimestamp="2026-01-22 16:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:35:27.283164754 +0000 UTC m=+348.766504039" watchObservedRunningTime="2026-01-22 16:35:27.286061025 +0000 UTC m=+348.769400310" Jan 22 16:35:27 crc kubenswrapper[4758]: I0122 16:35:27.371956 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 16:35:28 crc kubenswrapper[4758]: I0122 16:35:28.101564 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 16:35:28 crc kubenswrapper[4758]: I0122 16:35:28.176820 4758 scope.go:117] "RemoveContainer" containerID="22de38a9c2a4d1c3947051d22cf68d2bab824700160f14bc5261fa2c0278b3d2" Jan 22 16:35:28 crc kubenswrapper[4758]: E0122 16:35:28.177020 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-fjsgm_openshift-marketplace(5caed3c6-9037-4ecf-b0db-778db52bd3ee)\"" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" Jan 22 16:35:28 crc kubenswrapper[4758]: I0122 16:35:28.548999 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 16:35:28 crc kubenswrapper[4758]: I0122 16:35:28.758506 4758 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 16:35:29 crc kubenswrapper[4758]: I0122 16:35:29.386777 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 16:35:29 crc kubenswrapper[4758]: I0122 16:35:29.536231 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 16:35:30 crc kubenswrapper[4758]: I0122 16:35:30.398155 4758 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 16:35:30 crc kubenswrapper[4758]: I0122 16:35:30.398396 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://488e56973319c20746a71443384c19979ff0582b2d2bf8b3c346a98e39acfe96" gracePeriod=5 Jan 22 16:35:31 crc kubenswrapper[4758]: I0122 16:35:31.480102 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 16:35:31 crc kubenswrapper[4758]: I0122 16:35:31.513355 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.221394 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.221455 4758 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="488e56973319c20746a71443384c19979ff0582b2d2bf8b3c346a98e39acfe96" exitCode=137 Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.338802 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.339154 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.398490 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.398523 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.398546 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.398637 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.398695 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.399002 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.399076 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.399094 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.399591 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.449149 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.500392 4758 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.500444 4758 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.500465 4758 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.500476 4758 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.500484 4758 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.816450 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.816776 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.836447 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.836494 4758 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="129b15ad-8ff1-4f47-9cfb-c13b8ab20876" Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.841399 4758 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 16:35:36 crc kubenswrapper[4758]: I0122 16:35:36.841430 4758 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="129b15ad-8ff1-4f47-9cfb-c13b8ab20876" Jan 22 16:35:37 crc kubenswrapper[4758]: I0122 16:35:37.230351 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 16:35:37 crc kubenswrapper[4758]: I0122 16:35:37.230653 4758 scope.go:117] "RemoveContainer" containerID="488e56973319c20746a71443384c19979ff0582b2d2bf8b3c346a98e39acfe96" Jan 22 16:35:37 crc kubenswrapper[4758]: I0122 16:35:37.230800 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:35:38 crc kubenswrapper[4758]: I0122 16:35:38.813615 4758 scope.go:117] "RemoveContainer" containerID="22de38a9c2a4d1c3947051d22cf68d2bab824700160f14bc5261fa2c0278b3d2" Jan 22 16:35:38 crc kubenswrapper[4758]: E0122 16:35:38.814506 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-fjsgm_openshift-marketplace(5caed3c6-9037-4ecf-b0db-778db52bd3ee)\"" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.265219 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8v88c"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.265628 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8v88c" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerName="registry-server" containerID="cri-o://4f09d3a6ac11c76f074883c751146dc2e0c65ff684250918a6b12f70d1815a59" gracePeriod=30 Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.270509 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mh88h"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.271910 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mh88h" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerName="registry-server" containerID="cri-o://a8ce6f54e301d9403da36f9643f9eb1b971cacf3a128f5837b85f0f3053db213" gracePeriod=30 Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.281472 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2rzs"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.281912 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b2rzs" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerName="registry-server" containerID="cri-o://c0d8a02460c67b646af3631ad0dce7aa077a5a9e907c2b8a02543b1c3c968606" gracePeriod=30 Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.285509 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6qmr"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.286172 4758 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c6qmr" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerName="registry-server" containerID="cri-o://b62c8a1fcabd0d3a97f1533146cf8f0b11b055bc3905fefb7b0f4dd045495ade" gracePeriod=30 Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.291183 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjsgm"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.304053 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nthqj"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.304401 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nthqj" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerName="registry-server" containerID="cri-o://5280a70e36ea601ca10423751b3ae6b4478b1c7552ba2a0beb14a05778f13a39" gracePeriod=30 Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.309292 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f2gvw"] Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.309658 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.309685 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.309703 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" containerName="installer" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.309717 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" containerName="installer" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.309907 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.309932 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="3893a5d6-af77-48c5-a325-35d144e54f8a" containerName="installer" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.310506 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.318246 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjs4t"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.318589 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wjs4t" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerName="registry-server" containerID="cri-o://b9a1b8bc551fc1a90f093ecbae7e6a2e5dee6207119888b22c551b5e4ad3baf0" gracePeriod=30 Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.321136 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-559wb"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.322052 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-559wb" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerName="registry-server" containerID="cri-o://b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3" gracePeriod=30 Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.341009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f2gvw"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.353223 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.353484 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7bgv" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="registry-server" containerID="cri-o://c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6" gracePeriod=30 Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.438187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6daa1231-490e-4ff7-9157-f49cdec96a5e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f2gvw\" (UID: \"6daa1231-490e-4ff7-9157-f49cdec96a5e\") " pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.438730 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhwx\" (UniqueName: \"kubernetes.io/projected/6daa1231-490e-4ff7-9157-f49cdec96a5e-kube-api-access-njhwx\") pod \"marketplace-operator-79b997595-f2gvw\" (UID: \"6daa1231-490e-4ff7-9157-f49cdec96a5e\") " pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.438769 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6daa1231-490e-4ff7-9157-f49cdec96a5e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f2gvw\" (UID: \"6daa1231-490e-4ff7-9157-f49cdec96a5e\") " pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.533825 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fjsgm_5caed3c6-9037-4ecf-b0db-778db52bd3ee/marketplace-operator/2.log" Jan 22 16:35:39 crc kubenswrapper[4758]: 
I0122 16:35:39.533887 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.540233 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6daa1231-490e-4ff7-9157-f49cdec96a5e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f2gvw\" (UID: \"6daa1231-490e-4ff7-9157-f49cdec96a5e\") " pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.540296 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njhwx\" (UniqueName: \"kubernetes.io/projected/6daa1231-490e-4ff7-9157-f49cdec96a5e-kube-api-access-njhwx\") pod \"marketplace-operator-79b997595-f2gvw\" (UID: \"6daa1231-490e-4ff7-9157-f49cdec96a5e\") " pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.540321 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6daa1231-490e-4ff7-9157-f49cdec96a5e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f2gvw\" (UID: \"6daa1231-490e-4ff7-9157-f49cdec96a5e\") " pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.541570 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6daa1231-490e-4ff7-9157-f49cdec96a5e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f2gvw\" (UID: \"6daa1231-490e-4ff7-9157-f49cdec96a5e\") " pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.552677 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6daa1231-490e-4ff7-9157-f49cdec96a5e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f2gvw\" (UID: \"6daa1231-490e-4ff7-9157-f49cdec96a5e\") " pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.559336 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njhwx\" (UniqueName: \"kubernetes.io/projected/6daa1231-490e-4ff7-9157-f49cdec96a5e-kube-api-access-njhwx\") pod \"marketplace-operator-79b997595-f2gvw\" (UID: \"6daa1231-490e-4ff7-9157-f49cdec96a5e\") " pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.592454 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6 is running failed: container process not found" containerID="c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.592871 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6 is running failed: container process not found" 
containerID="c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.593106 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6 is running failed: container process not found" containerID="c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.593131 4758 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-s7bgv" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="registry-server" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.632048 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.640798 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/5caed3c6-9037-4ecf-b0db-778db52bd3ee-kube-api-access-ngx4q\") pod \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.640911 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-operator-metrics\") pod \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.640948 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-trusted-ca\") pod \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\" (UID: \"5caed3c6-9037-4ecf-b0db-778db52bd3ee\") " Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.642389 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5caed3c6-9037-4ecf-b0db-778db52bd3ee" (UID: "5caed3c6-9037-4ecf-b0db-778db52bd3ee"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.645412 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5caed3c6-9037-4ecf-b0db-778db52bd3ee" (UID: "5caed3c6-9037-4ecf-b0db-778db52bd3ee"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.645538 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5caed3c6-9037-4ecf-b0db-778db52bd3ee-kube-api-access-ngx4q" (OuterVolumeSpecName: "kube-api-access-ngx4q") pod "5caed3c6-9037-4ecf-b0db-778db52bd3ee" (UID: "5caed3c6-9037-4ecf-b0db-778db52bd3ee"). InnerVolumeSpecName "kube-api-access-ngx4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.689120 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3 is running failed: container process not found" containerID="b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.689520 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3 is running failed: container process not found" containerID="b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.689907 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3 is running failed: container process not found" containerID="b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 16:35:39 crc kubenswrapper[4758]: E0122 16:35:39.689963 4758 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-559wb" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerName="registry-server" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.742548 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.742582 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5caed3c6-9037-4ecf-b0db-778db52bd3ee-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:39 crc kubenswrapper[4758]: I0122 16:35:39.742595 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngx4q\" (UniqueName: \"kubernetes.io/projected/5caed3c6-9037-4ecf-b0db-778db52bd3ee-kube-api-access-ngx4q\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.049094 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f2gvw"] Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.252122 4758 generic.go:334] "Generic (PLEG): container finished" podID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" 
containerID="b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3" exitCode=0 Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.252197 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-559wb" event={"ID":"895a8f2e-590a-4270-9eb0-1f7c76da93d9","Type":"ContainerDied","Data":"b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.252250 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-559wb" event={"ID":"895a8f2e-590a-4270-9eb0-1f7c76da93d9","Type":"ContainerDied","Data":"27139c45d4140b5af7a08fb5e17b9f5d7f14f3a14c50375f804352b4adfb3170"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.252264 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27139c45d4140b5af7a08fb5e17b9f5d7f14f3a14c50375f804352b4adfb3170" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.255781 4758 generic.go:334] "Generic (PLEG): container finished" podID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerID="c0d8a02460c67b646af3631ad0dce7aa077a5a9e907c2b8a02543b1c3c968606" exitCode=0 Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.255855 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2rzs" event={"ID":"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9","Type":"ContainerDied","Data":"c0d8a02460c67b646af3631ad0dce7aa077a5a9e907c2b8a02543b1c3c968606"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.258161 4758 generic.go:334] "Generic (PLEG): container finished" podID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerID="b62c8a1fcabd0d3a97f1533146cf8f0b11b055bc3905fefb7b0f4dd045495ade" exitCode=0 Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.258247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6qmr" event={"ID":"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4","Type":"ContainerDied","Data":"b62c8a1fcabd0d3a97f1533146cf8f0b11b055bc3905fefb7b0f4dd045495ade"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.260533 4758 generic.go:334] "Generic (PLEG): container finished" podID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerID="a8ce6f54e301d9403da36f9643f9eb1b971cacf3a128f5837b85f0f3053db213" exitCode=0 Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.260607 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh88h" event={"ID":"0437f83e-83ed-42f5-88ab-110deeeac7a4","Type":"ContainerDied","Data":"a8ce6f54e301d9403da36f9643f9eb1b971cacf3a128f5837b85f0f3053db213"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.263236 4758 generic.go:334] "Generic (PLEG): container finished" podID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerID="4f09d3a6ac11c76f074883c751146dc2e0c65ff684250918a6b12f70d1815a59" exitCode=0 Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.263310 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v88c" event={"ID":"88b3808a-aa06-48ab-9b57-f474a2e1379a","Type":"ContainerDied","Data":"4f09d3a6ac11c76f074883c751146dc2e0c65ff684250918a6b12f70d1815a59"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.264394 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" 
event={"ID":"6daa1231-490e-4ff7-9157-f49cdec96a5e","Type":"ContainerStarted","Data":"ab6b6c9e2df9961182d36ec6a537cfe0f7036852a8eacfc894bea5d95b2decfd"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.266469 4758 generic.go:334] "Generic (PLEG): container finished" podID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerID="c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6" exitCode=0 Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.266543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"e88aa20b-e3aa-4cc2-856c-0dd5e9394992","Type":"ContainerDied","Data":"c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.266581 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7bgv" event={"ID":"e88aa20b-e3aa-4cc2-856c-0dd5e9394992","Type":"ContainerDied","Data":"d95a211d266181654e065fe79fa20039053a7cce147e1b240a6501b0d3cfaa03"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.266594 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d95a211d266181654e065fe79fa20039053a7cce147e1b240a6501b0d3cfaa03" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.268963 4758 generic.go:334] "Generic (PLEG): container finished" podID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerID="5280a70e36ea601ca10423751b3ae6b4478b1c7552ba2a0beb14a05778f13a39" exitCode=0 Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.269019 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nthqj" event={"ID":"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e","Type":"ContainerDied","Data":"5280a70e36ea601ca10423751b3ae6b4478b1c7552ba2a0beb14a05778f13a39"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.269344 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nthqj" event={"ID":"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e","Type":"ContainerDied","Data":"a913934e4409aaa5b93a33a016278889e8f1d89d95c7217a35d1c830b4dc92bb"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.269474 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a913934e4409aaa5b93a33a016278889e8f1d89d95c7217a35d1c830b4dc92bb" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.275759 4758 generic.go:334] "Generic (PLEG): container finished" podID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerID="b9a1b8bc551fc1a90f093ecbae7e6a2e5dee6207119888b22c551b5e4ad3baf0" exitCode=0 Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.275811 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjs4t" event={"ID":"a12a62bb-3713-4f66-902e-673cc09db2ee","Type":"ContainerDied","Data":"b9a1b8bc551fc1a90f093ecbae7e6a2e5dee6207119888b22c551b5e4ad3baf0"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.275834 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjs4t" event={"ID":"a12a62bb-3713-4f66-902e-673cc09db2ee","Type":"ContainerDied","Data":"7e4e5e6940233b49b11fd5366b591eda968fe326775d2c1e20458d4fb644172a"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.275844 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e4e5e6940233b49b11fd5366b591eda968fe326775d2c1e20458d4fb644172a" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.279121 
4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fjsgm_5caed3c6-9037-4ecf-b0db-778db52bd3ee/marketplace-operator/2.log" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.279212 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" event={"ID":"5caed3c6-9037-4ecf-b0db-778db52bd3ee","Type":"ContainerDied","Data":"e5e3ebdad4eeca671ca7800977916d8b4cd3ad73ac41d7c91106d2a709718986"} Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.279274 4758 scope.go:117] "RemoveContainer" containerID="22de38a9c2a4d1c3947051d22cf68d2bab824700160f14bc5261fa2c0278b3d2" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.279334 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fjsgm" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.456171 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.487939 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.498413 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjsgm"] Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.502573 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjsgm"] Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.519929 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.522817 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653239 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fp54\" (UniqueName: \"kubernetes.io/projected/895a8f2e-590a-4270-9eb0-1f7c76da93d9-kube-api-access-2fp54\") pod \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653300 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvcxg\" (UniqueName: \"kubernetes.io/projected/a12a62bb-3713-4f66-902e-673cc09db2ee-kube-api-access-kvcxg\") pod \"a12a62bb-3713-4f66-902e-673cc09db2ee\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tnwm\" (UniqueName: \"kubernetes.io/projected/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-kube-api-access-7tnwm\") pod \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653366 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-utilities\") pod \"a12a62bb-3713-4f66-902e-673cc09db2ee\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653398 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d864j\" (UniqueName: \"kubernetes.io/projected/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-kube-api-access-d864j\") pod \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653425 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-catalog-content\") pod \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653448 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-catalog-content\") pod \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653484 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-catalog-content\") pod \"a12a62bb-3713-4f66-902e-673cc09db2ee\" (UID: \"a12a62bb-3713-4f66-902e-673cc09db2ee\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653499 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-utilities\") pod \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653529 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-utilities\") 
pod \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\" (UID: \"e88aa20b-e3aa-4cc2-856c-0dd5e9394992\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653548 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-utilities\") pod \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\" (UID: \"895a8f2e-590a-4270-9eb0-1f7c76da93d9\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.653569 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-catalog-content\") pod \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\" (UID: \"25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.656301 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-utilities" (OuterVolumeSpecName: "utilities") pod "a12a62bb-3713-4f66-902e-673cc09db2ee" (UID: "a12a62bb-3713-4f66-902e-673cc09db2ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.656322 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-utilities" (OuterVolumeSpecName: "utilities") pod "e88aa20b-e3aa-4cc2-856c-0dd5e9394992" (UID: "e88aa20b-e3aa-4cc2-856c-0dd5e9394992"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.657800 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-utilities" (OuterVolumeSpecName: "utilities") pod "895a8f2e-590a-4270-9eb0-1f7c76da93d9" (UID: "895a8f2e-590a-4270-9eb0-1f7c76da93d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.661133 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/895a8f2e-590a-4270-9eb0-1f7c76da93d9-kube-api-access-2fp54" (OuterVolumeSpecName: "kube-api-access-2fp54") pod "895a8f2e-590a-4270-9eb0-1f7c76da93d9" (UID: "895a8f2e-590a-4270-9eb0-1f7c76da93d9"). InnerVolumeSpecName "kube-api-access-2fp54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.662793 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-kube-api-access-d864j" (OuterVolumeSpecName: "kube-api-access-d864j") pod "e88aa20b-e3aa-4cc2-856c-0dd5e9394992" (UID: "e88aa20b-e3aa-4cc2-856c-0dd5e9394992"). InnerVolumeSpecName "kube-api-access-d864j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.663918 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12a62bb-3713-4f66-902e-673cc09db2ee-kube-api-access-kvcxg" (OuterVolumeSpecName: "kube-api-access-kvcxg") pod "a12a62bb-3713-4f66-902e-673cc09db2ee" (UID: "a12a62bb-3713-4f66-902e-673cc09db2ee"). InnerVolumeSpecName "kube-api-access-kvcxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.673852 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-kube-api-access-7tnwm" (OuterVolumeSpecName: "kube-api-access-7tnwm") pod "25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" (UID: "25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e"). InnerVolumeSpecName "kube-api-access-7tnwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.675118 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-utilities" (OuterVolumeSpecName: "utilities") pod "25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" (UID: "25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.680508 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a12a62bb-3713-4f66-902e-673cc09db2ee" (UID: "a12a62bb-3713-4f66-902e-673cc09db2ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.700233 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" (UID: "25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.738809 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756674 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756761 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756779 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756791 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756850 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756862 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvcxg\" (UniqueName: \"kubernetes.io/projected/a12a62bb-3713-4f66-902e-673cc09db2ee-kube-api-access-kvcxg\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756905 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fp54\" (UniqueName: \"kubernetes.io/projected/895a8f2e-590a-4270-9eb0-1f7c76da93d9-kube-api-access-2fp54\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756919 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tnwm\" (UniqueName: \"kubernetes.io/projected/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e-kube-api-access-7tnwm\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756930 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a62bb-3713-4f66-902e-673cc09db2ee-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.756941 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d864j\" (UniqueName: \"kubernetes.io/projected/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-kube-api-access-d864j\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.799298 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.802674 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.807869 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "895a8f2e-590a-4270-9eb0-1f7c76da93d9" (UID: "895a8f2e-590a-4270-9eb0-1f7c76da93d9"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.817447 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" path="/var/lib/kubelet/pods/5caed3c6-9037-4ecf-b0db-778db52bd3ee/volumes" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.822753 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.858074 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-catalog-content\") pod \"88b3808a-aa06-48ab-9b57-f474a2e1379a\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.858171 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnx6v\" (UniqueName: \"kubernetes.io/projected/88b3808a-aa06-48ab-9b57-f474a2e1379a-kube-api-access-hnx6v\") pod \"88b3808a-aa06-48ab-9b57-f474a2e1379a\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.858208 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-utilities\") pod \"88b3808a-aa06-48ab-9b57-f474a2e1379a\" (UID: \"88b3808a-aa06-48ab-9b57-f474a2e1379a\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.859181 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-utilities" (OuterVolumeSpecName: "utilities") pod "88b3808a-aa06-48ab-9b57-f474a2e1379a" (UID: "88b3808a-aa06-48ab-9b57-f474a2e1379a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.860102 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895a8f2e-590a-4270-9eb0-1f7c76da93d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.861492 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b3808a-aa06-48ab-9b57-f474a2e1379a-kube-api-access-hnx6v" (OuterVolumeSpecName: "kube-api-access-hnx6v") pod "88b3808a-aa06-48ab-9b57-f474a2e1379a" (UID: "88b3808a-aa06-48ab-9b57-f474a2e1379a"). InnerVolumeSpecName "kube-api-access-hnx6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.873209 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e88aa20b-e3aa-4cc2-856c-0dd5e9394992" (UID: "e88aa20b-e3aa-4cc2-856c-0dd5e9394992"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.908193 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88b3808a-aa06-48ab-9b57-f474a2e1379a" (UID: "88b3808a-aa06-48ab-9b57-f474a2e1379a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960425 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg45h\" (UniqueName: \"kubernetes.io/projected/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-kube-api-access-hg45h\") pod \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960515 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-catalog-content\") pod \"0437f83e-83ed-42f5-88ab-110deeeac7a4\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960544 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kqdl\" (UniqueName: \"kubernetes.io/projected/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-kube-api-access-6kqdl\") pod \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960576 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gdng\" (UniqueName: \"kubernetes.io/projected/0437f83e-83ed-42f5-88ab-110deeeac7a4-kube-api-access-6gdng\") pod \"0437f83e-83ed-42f5-88ab-110deeeac7a4\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960613 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-catalog-content\") pod \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960635 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-utilities\") pod \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\" (UID: \"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960663 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-utilities\") pod \"0437f83e-83ed-42f5-88ab-110deeeac7a4\" (UID: \"0437f83e-83ed-42f5-88ab-110deeeac7a4\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960696 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-utilities\") pod \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960725 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-catalog-content\") pod \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\" (UID: \"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4\") " Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960959 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.960983 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b3808a-aa06-48ab-9b57-f474a2e1379a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.961003 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e88aa20b-e3aa-4cc2-856c-0dd5e9394992-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.961020 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnx6v\" (UniqueName: \"kubernetes.io/projected/88b3808a-aa06-48ab-9b57-f474a2e1379a-kube-api-access-hnx6v\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.961723 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-utilities" (OuterVolumeSpecName: "utilities") pod "0437f83e-83ed-42f5-88ab-110deeeac7a4" (UID: "0437f83e-83ed-42f5-88ab-110deeeac7a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.962199 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-utilities" (OuterVolumeSpecName: "utilities") pod "6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" (UID: "6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.962330 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-utilities" (OuterVolumeSpecName: "utilities") pod "8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" (UID: "8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.963507 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-kube-api-access-hg45h" (OuterVolumeSpecName: "kube-api-access-hg45h") pod "8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" (UID: "8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9"). InnerVolumeSpecName "kube-api-access-hg45h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.964324 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-kube-api-access-6kqdl" (OuterVolumeSpecName: "kube-api-access-6kqdl") pod "6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" (UID: "6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4"). InnerVolumeSpecName "kube-api-access-6kqdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:40 crc kubenswrapper[4758]: I0122 16:35:40.965599 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0437f83e-83ed-42f5-88ab-110deeeac7a4-kube-api-access-6gdng" (OuterVolumeSpecName: "kube-api-access-6gdng") pod "0437f83e-83ed-42f5-88ab-110deeeac7a4" (UID: "0437f83e-83ed-42f5-88ab-110deeeac7a4"). InnerVolumeSpecName "kube-api-access-6gdng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.018033 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0437f83e-83ed-42f5-88ab-110deeeac7a4" (UID: "0437f83e-83ed-42f5-88ab-110deeeac7a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.024526 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" (UID: "8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.032701 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" (UID: "6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.062532 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.062622 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.062682 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg45h\" (UniqueName: \"kubernetes.io/projected/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-kube-api-access-hg45h\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.062704 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.062722 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kqdl\" (UniqueName: \"kubernetes.io/projected/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4-kube-api-access-6kqdl\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.062785 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gdng\" (UniqueName: \"kubernetes.io/projected/0437f83e-83ed-42f5-88ab-110deeeac7a4-kube-api-access-6gdng\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.062802 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.062819 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.062838 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0437f83e-83ed-42f5-88ab-110deeeac7a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.287441 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2rzs" event={"ID":"8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9","Type":"ContainerDied","Data":"18c999ed0d6e1b4702584de69c6aed237434a262c3945cb9712190505b055913"} Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.287499 4758 scope.go:117] "RemoveContainer" containerID="c0d8a02460c67b646af3631ad0dce7aa077a5a9e907c2b8a02543b1c3c968606" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.287594 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2rzs" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.297073 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6qmr" event={"ID":"6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4","Type":"ContainerDied","Data":"930621c983d344d7049ad24f91878878538f33a7eeb161ae0f994ceb85ae9111"} Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.297080 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6qmr" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.301120 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh88h" event={"ID":"0437f83e-83ed-42f5-88ab-110deeeac7a4","Type":"ContainerDied","Data":"1b89572c89fa54bb472656f35042d63af561f98b6ebebf494db7601b9df0a43e"} Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.301224 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mh88h" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.303638 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8v88c" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.303645 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v88c" event={"ID":"88b3808a-aa06-48ab-9b57-f474a2e1379a","Type":"ContainerDied","Data":"2a986056293c56b3775e881773212166c7505ac4f9f89e8e2f09f84ce3910057"} Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.305581 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nthqj" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.305611 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" event={"ID":"6daa1231-490e-4ff7-9157-f49cdec96a5e","Type":"ContainerStarted","Data":"ad4303b386c6e21f3904b24f988068646e3106398b796a612dade9432bc95cd7"} Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.305593 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-559wb" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.305666 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjs4t" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.305765 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7bgv" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.306123 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.308951 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.315875 4758 scope.go:117] "RemoveContainer" containerID="1e0acb8ed556cc14512fc308b5a524d120c95ce37b647251de19030c581bf8d9" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.337485 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" podStartSLOduration=2.337465833 podStartE2EDuration="2.337465833s" podCreationTimestamp="2026-01-22 16:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:35:41.331089635 +0000 UTC m=+362.814428940" watchObservedRunningTime="2026-01-22 16:35:41.337465833 +0000 UTC m=+362.820805118" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.346185 4758 scope.go:117] "RemoveContainer" containerID="2b06c240c13ac71aa873c8491ed2c54fb64ed87343fd8ba85555e22e613c36b8" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.348795 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nthqj"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.353905 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nthqj"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.383546 4758 scope.go:117] "RemoveContainer" containerID="b62c8a1fcabd0d3a97f1533146cf8f0b11b055bc3905fefb7b0f4dd045495ade" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.401326 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjs4t"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.408299 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjs4t"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.412762 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8v88c"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.416125 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8v88c"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.419644 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mh88h"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.425012 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mh88h"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.428482 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6qmr"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.432183 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c6qmr"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.437229 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-559wb"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 
16:35:41.441302 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-559wb"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.444374 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2rzs"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.447471 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b2rzs"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.450222 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.451964 4758 scope.go:117] "RemoveContainer" containerID="5c7fd3b6b998083fd2f09c10a4cf8852ce10f3d758f76897756b5137a5c54138" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.452734 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7bgv"] Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.464940 4758 scope.go:117] "RemoveContainer" containerID="43d9a5e9db109d92bdb8bc0744b9b457e395a08a418345afba79e3c1b91ddc02" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.479870 4758 scope.go:117] "RemoveContainer" containerID="a8ce6f54e301d9403da36f9643f9eb1b971cacf3a128f5837b85f0f3053db213" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.493605 4758 scope.go:117] "RemoveContainer" containerID="2df174418e884b4cf3b67404b07f226cf8e1296b25c4ff1d7cab69ccd1fdd01c" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.516563 4758 scope.go:117] "RemoveContainer" containerID="679e83afb94bfd6c31f16c82313770f48b39c11d40eb40aff2e9b243c3a5faf6" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.541648 4758 scope.go:117] "RemoveContainer" containerID="4f09d3a6ac11c76f074883c751146dc2e0c65ff684250918a6b12f70d1815a59" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.559868 4758 scope.go:117] "RemoveContainer" containerID="9e9cc8c35fc8f5cdfd74e6abff53ae2eac7dfda663c9ae64f12d5a594faef9cf" Jan 22 16:35:41 crc kubenswrapper[4758]: I0122 16:35:41.578919 4758 scope.go:117] "RemoveContainer" containerID="81348e474a064553ee490f2f52e2a9d4997af0961b7545ad14415651bbb90908" Jan 22 16:35:42 crc kubenswrapper[4758]: I0122 16:35:42.814154 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" path="/var/lib/kubelet/pods/0437f83e-83ed-42f5-88ab-110deeeac7a4/volumes" Jan 22 16:35:42 crc kubenswrapper[4758]: I0122 16:35:42.815563 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" path="/var/lib/kubelet/pods/25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e/volumes" Jan 22 16:35:42 crc kubenswrapper[4758]: I0122 16:35:42.816449 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" path="/var/lib/kubelet/pods/6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4/volumes" Jan 22 16:35:42 crc kubenswrapper[4758]: I0122 16:35:42.817238 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" path="/var/lib/kubelet/pods/88b3808a-aa06-48ab-9b57-f474a2e1379a/volumes" Jan 22 16:35:42 crc kubenswrapper[4758]: I0122 16:35:42.818277 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" path="/var/lib/kubelet/pods/895a8f2e-590a-4270-9eb0-1f7c76da93d9/volumes" Jan 22 16:35:42 crc kubenswrapper[4758]: I0122 16:35:42.819179 
4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" path="/var/lib/kubelet/pods/8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9/volumes" Jan 22 16:35:42 crc kubenswrapper[4758]: I0122 16:35:42.820109 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" path="/var/lib/kubelet/pods/a12a62bb-3713-4f66-902e-673cc09db2ee/volumes" Jan 22 16:35:42 crc kubenswrapper[4758]: I0122 16:35:42.821521 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" path="/var/lib/kubelet/pods/e88aa20b-e3aa-4cc2-856c-0dd5e9394992/volumes" Jan 22 16:35:43 crc kubenswrapper[4758]: I0122 16:35:43.836955 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:35:43 crc kubenswrapper[4758]: I0122 16:35:43.836999 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.366387 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hwwcr"] Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.367176 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" podUID="ac22080d-c713-4917-9254-d103edaa0c3e" containerName="controller-manager" containerID="cri-o://9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548" gracePeriod=30 Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.500472 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5"] Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.500826 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" podUID="7d7a9e04-71e1-4090-96af-395ad7e823ac" containerName="route-controller-manager" containerID="cri-o://011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66" gracePeriod=30 Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.797166 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.857417 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh9sv\" (UniqueName: \"kubernetes.io/projected/ac22080d-c713-4917-9254-d103edaa0c3e-kube-api-access-fh9sv\") pod \"ac22080d-c713-4917-9254-d103edaa0c3e\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.857523 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac22080d-c713-4917-9254-d103edaa0c3e-serving-cert\") pod \"ac22080d-c713-4917-9254-d103edaa0c3e\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.857589 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-client-ca\") pod \"ac22080d-c713-4917-9254-d103edaa0c3e\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.857625 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-proxy-ca-bundles\") pod \"ac22080d-c713-4917-9254-d103edaa0c3e\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.857676 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-config\") pod \"ac22080d-c713-4917-9254-d103edaa0c3e\" (UID: \"ac22080d-c713-4917-9254-d103edaa0c3e\") " Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.860044 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-client-ca" (OuterVolumeSpecName: "client-ca") pod "ac22080d-c713-4917-9254-d103edaa0c3e" (UID: "ac22080d-c713-4917-9254-d103edaa0c3e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.860110 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ac22080d-c713-4917-9254-d103edaa0c3e" (UID: "ac22080d-c713-4917-9254-d103edaa0c3e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.861089 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-config" (OuterVolumeSpecName: "config") pod "ac22080d-c713-4917-9254-d103edaa0c3e" (UID: "ac22080d-c713-4917-9254-d103edaa0c3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.867331 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac22080d-c713-4917-9254-d103edaa0c3e-kube-api-access-fh9sv" (OuterVolumeSpecName: "kube-api-access-fh9sv") pod "ac22080d-c713-4917-9254-d103edaa0c3e" (UID: "ac22080d-c713-4917-9254-d103edaa0c3e"). InnerVolumeSpecName "kube-api-access-fh9sv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.868408 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac22080d-c713-4917-9254-d103edaa0c3e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ac22080d-c713-4917-9254-d103edaa0c3e" (UID: "ac22080d-c713-4917-9254-d103edaa0c3e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.903133 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.959362 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-client-ca\") pod \"7d7a9e04-71e1-4090-96af-395ad7e823ac\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.959467 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6fxt\" (UniqueName: \"kubernetes.io/projected/7d7a9e04-71e1-4090-96af-395ad7e823ac-kube-api-access-j6fxt\") pod \"7d7a9e04-71e1-4090-96af-395ad7e823ac\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.959527 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-config\") pod \"7d7a9e04-71e1-4090-96af-395ad7e823ac\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.959601 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7a9e04-71e1-4090-96af-395ad7e823ac-serving-cert\") pod \"7d7a9e04-71e1-4090-96af-395ad7e823ac\" (UID: \"7d7a9e04-71e1-4090-96af-395ad7e823ac\") " Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.959946 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh9sv\" (UniqueName: \"kubernetes.io/projected/ac22080d-c713-4917-9254-d103edaa0c3e-kube-api-access-fh9sv\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.959967 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac22080d-c713-4917-9254-d103edaa0c3e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.959985 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.960001 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.960017 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac22080d-c713-4917-9254-d103edaa0c3e-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.960263 4758 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d7a9e04-71e1-4090-96af-395ad7e823ac" (UID: "7d7a9e04-71e1-4090-96af-395ad7e823ac"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.960387 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-config" (OuterVolumeSpecName: "config") pod "7d7a9e04-71e1-4090-96af-395ad7e823ac" (UID: "7d7a9e04-71e1-4090-96af-395ad7e823ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.963292 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7a9e04-71e1-4090-96af-395ad7e823ac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d7a9e04-71e1-4090-96af-395ad7e823ac" (UID: "7d7a9e04-71e1-4090-96af-395ad7e823ac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:11 crc kubenswrapper[4758]: I0122 16:36:11.963467 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7a9e04-71e1-4090-96af-395ad7e823ac-kube-api-access-j6fxt" (OuterVolumeSpecName: "kube-api-access-j6fxt") pod "7d7a9e04-71e1-4090-96af-395ad7e823ac" (UID: "7d7a9e04-71e1-4090-96af-395ad7e823ac"). InnerVolumeSpecName "kube-api-access-j6fxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.061259 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6fxt\" (UniqueName: \"kubernetes.io/projected/7d7a9e04-71e1-4090-96af-395ad7e823ac-kube-api-access-j6fxt\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.061307 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.061320 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d7a9e04-71e1-4090-96af-395ad7e823ac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.061334 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d7a9e04-71e1-4090-96af-395ad7e823ac-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.481645 4758 generic.go:334] "Generic (PLEG): container finished" podID="ac22080d-c713-4917-9254-d103edaa0c3e" containerID="9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548" exitCode=0 Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.481714 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.481714 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" event={"ID":"ac22080d-c713-4917-9254-d103edaa0c3e","Type":"ContainerDied","Data":"9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548"} Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.481855 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hwwcr" event={"ID":"ac22080d-c713-4917-9254-d103edaa0c3e","Type":"ContainerDied","Data":"92755c40a94b140798e4303171dc6c8a96905bcead76099262baa56656e94f94"} Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.481878 4758 scope.go:117] "RemoveContainer" containerID="9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.489895 4758 generic.go:334] "Generic (PLEG): container finished" podID="7d7a9e04-71e1-4090-96af-395ad7e823ac" containerID="011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66" exitCode=0 Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.489933 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" event={"ID":"7d7a9e04-71e1-4090-96af-395ad7e823ac","Type":"ContainerDied","Data":"011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66"} Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.489970 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" event={"ID":"7d7a9e04-71e1-4090-96af-395ad7e823ac","Type":"ContainerDied","Data":"14d91218c33c4d31cd87f596b0b3e9c8680372f9673bd406378fee8ac09cc6e1"} Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.489972 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.528330 4758 scope.go:117] "RemoveContainer" containerID="9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548" Jan 22 16:36:12 crc kubenswrapper[4758]: E0122 16:36:12.528833 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548\": container with ID starting with 9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548 not found: ID does not exist" containerID="9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.528858 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548"} err="failed to get container status \"9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548\": rpc error: code = NotFound desc = could not find container \"9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548\": container with ID starting with 9e309c4a67c8b39e3006925b03a415d1536de2550e843e2d041cfc7def210548 not found: ID does not exist" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.528880 4758 scope.go:117] "RemoveContainer" containerID="011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.537431 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5"] Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.541474 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qc9q5"] Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.547772 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hwwcr"] Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.550683 4758 scope.go:117] "RemoveContainer" containerID="011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.550950 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hwwcr"] Jan 22 16:36:12 crc kubenswrapper[4758]: E0122 16:36:12.551180 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66\": container with ID starting with 011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66 not found: ID does not exist" containerID="011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.551212 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66"} err="failed to get container status \"011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66\": rpc error: code = NotFound desc = could not find container \"011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66\": container with ID starting with 011e92b10292b22c4668915368ebcce2824985ec2ca68fa3c93833db32e61c66 not found: ID does not exist" Jan 22 
16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.814806 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7a9e04-71e1-4090-96af-395ad7e823ac" path="/var/lib/kubelet/pods/7d7a9e04-71e1-4090-96af-395ad7e823ac/volumes" Jan 22 16:36:12 crc kubenswrapper[4758]: I0122 16:36:12.815419 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac22080d-c713-4917-9254-d103edaa0c3e" path="/var/lib/kubelet/pods/ac22080d-c713-4917-9254-d103edaa0c3e/volumes" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.314202 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577db457fc-nw295"] Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315211 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315263 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315293 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315313 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315346 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315366 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315393 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315443 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315475 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315493 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315517 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315534 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315568 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315589 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315612 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315630 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315653 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315672 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315697 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315715 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315734 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7a9e04-71e1-4090-96af-395ad7e823ac" containerName="route-controller-manager" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315808 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7a9e04-71e1-4090-96af-395ad7e823ac" containerName="route-controller-manager" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315832 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315851 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315873 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315890 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315914 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315933 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.315958 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.315976 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316000 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316018 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerName="registry-server" Jan 22 16:36:13 
crc kubenswrapper[4758]: E0122 16:36:13.316036 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316052 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316076 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316094 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316120 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316137 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316159 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac22080d-c713-4917-9254-d103edaa0c3e" containerName="controller-manager" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316178 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac22080d-c713-4917-9254-d103edaa0c3e" containerName="controller-manager" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316208 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316226 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316245 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316263 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316282 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316301 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316323 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316344 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316369 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316388 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" 
containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316415 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316434 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316459 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316478 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerName="extract-utilities" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.316506 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316524 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerName="extract-content" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316805 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12a62bb-3713-4f66-902e-673cc09db2ee" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316843 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7a9e04-71e1-4090-96af-395ad7e823ac" containerName="route-controller-manager" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316870 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b3808a-aa06-48ab-9b57-f474a2e1379a" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316890 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e88aa20b-e3aa-4cc2-856c-0dd5e9394992" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316912 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="895a8f2e-590a-4270-9eb0-1f7c76da93d9" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316932 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac22080d-c713-4917-9254-d103edaa0c3e" containerName="controller-manager" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316958 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef866e2-0d1f-4d1a-8ffc-5be203ce73c9" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.316982 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e0ddcb-ad39-41bc-b9e5-05c0c8c71f7e" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.317008 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b2bf50f-bb24-4544-a6eb-1a3b81fd91a4" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.317037 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0437f83e-83ed-42f5-88ab-110deeeac7a4" containerName="registry-server" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.317060 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.317086 4758 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.317111 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.317938 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64585bb48f-6psbz"] Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.318114 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: E0122 16:36:13.319329 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.319368 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5caed3c6-9037-4ecf-b0db-778db52bd3ee" containerName="marketplace-operator" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.321002 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.324261 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.324445 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.324503 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.324824 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.324854 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.325261 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.325594 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.325720 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.327623 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.327852 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.328023 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.328183 4758 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.332759 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577db457fc-nw295"] Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.336865 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.340970 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64585bb48f-6psbz"] Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.478894 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f55ps\" (UniqueName: \"kubernetes.io/projected/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-kube-api-access-f55ps\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.479346 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-proxy-ca-bundles\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.479573 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-serving-cert\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.479729 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-config\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.479898 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj22t\" (UniqueName: \"kubernetes.io/projected/1ce9b9d8-0324-4470-8f31-9feef5a1a975-kube-api-access-jj22t\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.480033 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ce9b9d8-0324-4470-8f31-9feef5a1a975-serving-cert\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.480192 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-client-ca\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.480295 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-config\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.480399 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-client-ca\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.581666 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-serving-cert\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.582593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-config\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.582669 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj22t\" (UniqueName: \"kubernetes.io/projected/1ce9b9d8-0324-4470-8f31-9feef5a1a975-kube-api-access-jj22t\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.582810 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ce9b9d8-0324-4470-8f31-9feef5a1a975-serving-cert\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.582881 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-client-ca\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.582917 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-config\") pod \"route-controller-manager-577db457fc-nw295\" (UID: 
\"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.582944 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-client-ca\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.582989 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f55ps\" (UniqueName: \"kubernetes.io/projected/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-kube-api-access-f55ps\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.583032 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-proxy-ca-bundles\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.584301 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-client-ca\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.584396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-config\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.585771 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-config\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.585819 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-client-ca\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.586092 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-proxy-ca-bundles\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.591129 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-serving-cert\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.592326 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ce9b9d8-0324-4470-8f31-9feef5a1a975-serving-cert\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.600930 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj22t\" (UniqueName: \"kubernetes.io/projected/1ce9b9d8-0324-4470-8f31-9feef5a1a975-kube-api-access-jj22t\") pod \"controller-manager-64585bb48f-6psbz\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.611049 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f55ps\" (UniqueName: \"kubernetes.io/projected/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-kube-api-access-f55ps\") pod \"route-controller-manager-577db457fc-nw295\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.644888 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.661310 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.837760 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.838111 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.838165 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577db457fc-nw295"] Jan 22 16:36:13 crc kubenswrapper[4758]: I0122 16:36:13.886042 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64585bb48f-6psbz"] Jan 22 16:36:13 crc kubenswrapper[4758]: W0122 16:36:13.892864 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce9b9d8_0324_4470_8f31_9feef5a1a975.slice/crio-f6f8ef7d2dda3c3e8342c376d3ded2af3fc8c3f73d3b4c6fbc00c5276f002bc3 WatchSource:0}: Error finding container f6f8ef7d2dda3c3e8342c376d3ded2af3fc8c3f73d3b4c6fbc00c5276f002bc3: Status 404 returned error can't find the container with id f6f8ef7d2dda3c3e8342c376d3ded2af3fc8c3f73d3b4c6fbc00c5276f002bc3 Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.507967 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" event={"ID":"1ce9b9d8-0324-4470-8f31-9feef5a1a975","Type":"ContainerStarted","Data":"02abd7666e9039c4bca4f7c5dfb6c84a67ab1603607a196aa9aafef19c30b46a"} Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.508017 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" event={"ID":"1ce9b9d8-0324-4470-8f31-9feef5a1a975","Type":"ContainerStarted","Data":"f6f8ef7d2dda3c3e8342c376d3ded2af3fc8c3f73d3b4c6fbc00c5276f002bc3"} Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.508901 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.509943 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" event={"ID":"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee","Type":"ContainerStarted","Data":"5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32"} Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.509972 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" event={"ID":"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee","Type":"ContainerStarted","Data":"9d7686cd07f39b4999a2e9c81d98eca83b59cf55ecc72783647591ed629d8978"} Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.510405 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.522455 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.535318 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" podStartSLOduration=3.535301409 podStartE2EDuration="3.535301409s" podCreationTimestamp="2026-01-22 16:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:36:14.534502547 +0000 UTC m=+396.017841832" watchObservedRunningTime="2026-01-22 16:36:14.535301409 +0000 UTC m=+396.018640694" Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.559567 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" podStartSLOduration=3.559549174 podStartE2EDuration="3.559549174s" podCreationTimestamp="2026-01-22 16:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:36:14.557600069 +0000 UTC m=+396.040939354" watchObservedRunningTime="2026-01-22 16:36:14.559549174 +0000 UTC m=+396.042888459" Jan 22 16:36:14 crc kubenswrapper[4758]: I0122 16:36:14.997926 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:36:19 crc kubenswrapper[4758]: I0122 16:36:19.147042 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcbh7"] Jan 22 16:36:28 crc kubenswrapper[4758]: I0122 16:36:28.920634 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m8fjx"] Jan 22 16:36:28 crc kubenswrapper[4758]: I0122 16:36:28.923718 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:28 crc kubenswrapper[4758]: I0122 16:36:28.927533 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 16:36:28 crc kubenswrapper[4758]: I0122 16:36:28.927871 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8fjx"] Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.064642 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b59c09-1a10-4c8a-946b-0f760e9ba4a6-utilities\") pod \"redhat-marketplace-m8fjx\" (UID: \"08b59c09-1a10-4c8a-946b-0f760e9ba4a6\") " pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.064713 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b59c09-1a10-4c8a-946b-0f760e9ba4a6-catalog-content\") pod \"redhat-marketplace-m8fjx\" (UID: \"08b59c09-1a10-4c8a-946b-0f760e9ba4a6\") " pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.064838 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddg4j\" (UniqueName: \"kubernetes.io/projected/08b59c09-1a10-4c8a-946b-0f760e9ba4a6-kube-api-access-ddg4j\") pod \"redhat-marketplace-m8fjx\" (UID: \"08b59c09-1a10-4c8a-946b-0f760e9ba4a6\") " pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.165792 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddg4j\" (UniqueName: \"kubernetes.io/projected/08b59c09-1a10-4c8a-946b-0f760e9ba4a6-kube-api-access-ddg4j\") pod \"redhat-marketplace-m8fjx\" (UID: \"08b59c09-1a10-4c8a-946b-0f760e9ba4a6\") " pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.165875 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b59c09-1a10-4c8a-946b-0f760e9ba4a6-utilities\") pod \"redhat-marketplace-m8fjx\" (UID: \"08b59c09-1a10-4c8a-946b-0f760e9ba4a6\") " pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.165959 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b59c09-1a10-4c8a-946b-0f760e9ba4a6-catalog-content\") pod \"redhat-marketplace-m8fjx\" (UID: \"08b59c09-1a10-4c8a-946b-0f760e9ba4a6\") " pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.166377 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b59c09-1a10-4c8a-946b-0f760e9ba4a6-utilities\") pod \"redhat-marketplace-m8fjx\" (UID: \"08b59c09-1a10-4c8a-946b-0f760e9ba4a6\") " pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.166399 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b59c09-1a10-4c8a-946b-0f760e9ba4a6-catalog-content\") pod \"redhat-marketplace-m8fjx\" (UID: 
\"08b59c09-1a10-4c8a-946b-0f760e9ba4a6\") " pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.185929 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddg4j\" (UniqueName: \"kubernetes.io/projected/08b59c09-1a10-4c8a-946b-0f760e9ba4a6-kube-api-access-ddg4j\") pod \"redhat-marketplace-m8fjx\" (UID: \"08b59c09-1a10-4c8a-946b-0f760e9ba4a6\") " pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.254464 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.311587 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-45rp2"] Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.313686 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.315631 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.333349 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45rp2"] Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.469183 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2970941d-360b-4f65-befc-15b942098ef1-catalog-content\") pod \"redhat-operators-45rp2\" (UID: \"2970941d-360b-4f65-befc-15b942098ef1\") " pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.469281 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nglqj\" (UniqueName: \"kubernetes.io/projected/2970941d-360b-4f65-befc-15b942098ef1-kube-api-access-nglqj\") pod \"redhat-operators-45rp2\" (UID: \"2970941d-360b-4f65-befc-15b942098ef1\") " pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.469317 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2970941d-360b-4f65-befc-15b942098ef1-utilities\") pod \"redhat-operators-45rp2\" (UID: \"2970941d-360b-4f65-befc-15b942098ef1\") " pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.643665 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nglqj\" (UniqueName: \"kubernetes.io/projected/2970941d-360b-4f65-befc-15b942098ef1-kube-api-access-nglqj\") pod \"redhat-operators-45rp2\" (UID: \"2970941d-360b-4f65-befc-15b942098ef1\") " pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.643726 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2970941d-360b-4f65-befc-15b942098ef1-utilities\") pod \"redhat-operators-45rp2\" (UID: \"2970941d-360b-4f65-befc-15b942098ef1\") " pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.643780 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2970941d-360b-4f65-befc-15b942098ef1-catalog-content\") pod \"redhat-operators-45rp2\" (UID: \"2970941d-360b-4f65-befc-15b942098ef1\") " pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.644651 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2970941d-360b-4f65-befc-15b942098ef1-catalog-content\") pod \"redhat-operators-45rp2\" (UID: \"2970941d-360b-4f65-befc-15b942098ef1\") " pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.644836 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2970941d-360b-4f65-befc-15b942098ef1-utilities\") pod \"redhat-operators-45rp2\" (UID: \"2970941d-360b-4f65-befc-15b942098ef1\") " pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.666162 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nglqj\" (UniqueName: \"kubernetes.io/projected/2970941d-360b-4f65-befc-15b942098ef1-kube-api-access-nglqj\") pod \"redhat-operators-45rp2\" (UID: \"2970941d-360b-4f65-befc-15b942098ef1\") " pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.701363 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8fjx"] Jan 22 16:36:29 crc kubenswrapper[4758]: W0122 16:36:29.703736 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08b59c09_1a10_4c8a_946b_0f760e9ba4a6.slice/crio-4cf2c864dc8e51dd5a38075e91c93c28d311bc0fae62b30320c66d48adb507e0 WatchSource:0}: Error finding container 4cf2c864dc8e51dd5a38075e91c93c28d311bc0fae62b30320c66d48adb507e0: Status 404 returned error can't find the container with id 4cf2c864dc8e51dd5a38075e91c93c28d311bc0fae62b30320c66d48adb507e0 Jan 22 16:36:29 crc kubenswrapper[4758]: I0122 16:36:29.946669 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.362347 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45rp2"] Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.661365 4758 generic.go:334] "Generic (PLEG): container finished" podID="2970941d-360b-4f65-befc-15b942098ef1" containerID="9d0f7ad8da6f13aab3f6f83d1229d379267096a282a5d52af1a14ef32cf6a931" exitCode=0 Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.661501 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45rp2" event={"ID":"2970941d-360b-4f65-befc-15b942098ef1","Type":"ContainerDied","Data":"9d0f7ad8da6f13aab3f6f83d1229d379267096a282a5d52af1a14ef32cf6a931"} Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.661552 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45rp2" event={"ID":"2970941d-360b-4f65-befc-15b942098ef1","Type":"ContainerStarted","Data":"69dacfcf19a7887fbf033a23c5a4b5f6989da81e42dff9df39a967abe0b4fd0b"} Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.664549 4758 generic.go:334] "Generic (PLEG): container finished" podID="08b59c09-1a10-4c8a-946b-0f760e9ba4a6" containerID="190cad008f01c055a12e0f490241757e7f9a2d276398b5447948c229c85851d7" exitCode=0 Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.664617 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8fjx" event={"ID":"08b59c09-1a10-4c8a-946b-0f760e9ba4a6","Type":"ContainerDied","Data":"190cad008f01c055a12e0f490241757e7f9a2d276398b5447948c229c85851d7"} Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.664657 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8fjx" event={"ID":"08b59c09-1a10-4c8a-946b-0f760e9ba4a6","Type":"ContainerStarted","Data":"4cf2c864dc8e51dd5a38075e91c93c28d311bc0fae62b30320c66d48adb507e0"} Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.917522 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lwpnp"] Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.919694 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.935911 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lwpnp"] Jan 22 16:36:30 crc kubenswrapper[4758]: I0122 16:36:30.937833 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.059250 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488rc\" (UniqueName: \"kubernetes.io/projected/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-kube-api-access-488rc\") pod \"certified-operators-lwpnp\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.059456 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-utilities\") pod \"certified-operators-lwpnp\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.059530 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-catalog-content\") pod \"certified-operators-lwpnp\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.160335 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-utilities\") pod \"certified-operators-lwpnp\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.160384 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-catalog-content\") pod \"certified-operators-lwpnp\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.160432 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-488rc\" (UniqueName: \"kubernetes.io/projected/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-kube-api-access-488rc\") pod \"certified-operators-lwpnp\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.161007 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-utilities\") pod \"certified-operators-lwpnp\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.161354 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-catalog-content\") pod \"certified-operators-lwpnp\" (UID: 
\"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.188681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-488rc\" (UniqueName: \"kubernetes.io/projected/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-kube-api-access-488rc\") pod \"certified-operators-lwpnp\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.257251 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.524617 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lwpnp"] Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.672364 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8fjx" event={"ID":"08b59c09-1a10-4c8a-946b-0f760e9ba4a6","Type":"ContainerStarted","Data":"0703ac9697986a5a563d00322670a5698a33dc87e9661983e3d6c1a162c25211"} Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.676927 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwpnp" event={"ID":"d5b62b0f-9c35-46f7-b806-69b0a53eaf63","Type":"ContainerStarted","Data":"cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75"} Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.676978 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwpnp" event={"ID":"d5b62b0f-9c35-46f7-b806-69b0a53eaf63","Type":"ContainerStarted","Data":"06c076e6ccfd35bb44586790e8926666b64a0e5d389a2a44de3dd5bdecdaee28"} Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.909823 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6nnrg"] Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.911525 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.913498 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.919566 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nnrg"] Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.989300 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-catalog-content\") pod \"community-operators-6nnrg\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.989427 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-utilities\") pod \"community-operators-6nnrg\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:31 crc kubenswrapper[4758]: I0122 16:36:31.989551 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2blxx\" (UniqueName: \"kubernetes.io/projected/6353b564-856d-4648-88f7-b4630ec7bf7b-kube-api-access-2blxx\") pod \"community-operators-6nnrg\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.090811 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2blxx\" (UniqueName: \"kubernetes.io/projected/6353b564-856d-4648-88f7-b4630ec7bf7b-kube-api-access-2blxx\") pod \"community-operators-6nnrg\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.091713 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-catalog-content\") pod \"community-operators-6nnrg\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.091861 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-utilities\") pod \"community-operators-6nnrg\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.092507 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-utilities\") pod \"community-operators-6nnrg\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.093666 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-catalog-content\") pod \"community-operators-6nnrg\" (UID: 
\"6353b564-856d-4648-88f7-b4630ec7bf7b\") " pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.113629 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2blxx\" (UniqueName: \"kubernetes.io/projected/6353b564-856d-4648-88f7-b4630ec7bf7b-kube-api-access-2blxx\") pod \"community-operators-6nnrg\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.226856 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.607274 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nnrg"] Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.684043 4758 generic.go:334] "Generic (PLEG): container finished" podID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerID="cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75" exitCode=0 Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.684195 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwpnp" event={"ID":"d5b62b0f-9c35-46f7-b806-69b0a53eaf63","Type":"ContainerDied","Data":"cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75"} Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.686175 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nnrg" event={"ID":"6353b564-856d-4648-88f7-b4630ec7bf7b","Type":"ContainerStarted","Data":"9e1455f839f500e1994a438c9abfe8179097424275168e7eb23728d87c792213"} Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.690640 4758 generic.go:334] "Generic (PLEG): container finished" podID="08b59c09-1a10-4c8a-946b-0f760e9ba4a6" containerID="0703ac9697986a5a563d00322670a5698a33dc87e9661983e3d6c1a162c25211" exitCode=0 Jan 22 16:36:32 crc kubenswrapper[4758]: I0122 16:36:32.690687 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8fjx" event={"ID":"08b59c09-1a10-4c8a-946b-0f760e9ba4a6","Type":"ContainerDied","Data":"0703ac9697986a5a563d00322670a5698a33dc87e9661983e3d6c1a162c25211"} Jan 22 16:36:33 crc kubenswrapper[4758]: I0122 16:36:33.697909 4758 generic.go:334] "Generic (PLEG): container finished" podID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerID="f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f" exitCode=0 Jan 22 16:36:33 crc kubenswrapper[4758]: I0122 16:36:33.697949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nnrg" event={"ID":"6353b564-856d-4648-88f7-b4630ec7bf7b","Type":"ContainerDied","Data":"f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f"} Jan 22 16:36:34 crc kubenswrapper[4758]: I0122 16:36:34.709763 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8fjx" event={"ID":"08b59c09-1a10-4c8a-946b-0f760e9ba4a6","Type":"ContainerStarted","Data":"bee0c9ad39df3d1f194d85850e29c686abecaeffcb8723d91273d5f4090afe08"} Jan 22 16:36:34 crc kubenswrapper[4758]: I0122 16:36:34.712472 4758 generic.go:334] "Generic (PLEG): container finished" podID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerID="ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8" exitCode=0 Jan 22 16:36:34 crc kubenswrapper[4758]: I0122 
16:36:34.712520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwpnp" event={"ID":"d5b62b0f-9c35-46f7-b806-69b0a53eaf63","Type":"ContainerDied","Data":"ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8"} Jan 22 16:36:34 crc kubenswrapper[4758]: I0122 16:36:34.730501 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m8fjx" podStartSLOduration=3.684621639 podStartE2EDuration="6.730484577s" podCreationTimestamp="2026-01-22 16:36:28 +0000 UTC" firstStartedPulling="2026-01-22 16:36:30.666729021 +0000 UTC m=+412.150068356" lastFinishedPulling="2026-01-22 16:36:33.712592009 +0000 UTC m=+415.195931294" observedRunningTime="2026-01-22 16:36:34.728262785 +0000 UTC m=+416.211602070" watchObservedRunningTime="2026-01-22 16:36:34.730484577 +0000 UTC m=+416.213823872" Jan 22 16:36:36 crc kubenswrapper[4758]: I0122 16:36:36.726844 4758 generic.go:334] "Generic (PLEG): container finished" podID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerID="b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086" exitCode=0 Jan 22 16:36:36 crc kubenswrapper[4758]: I0122 16:36:36.726957 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nnrg" event={"ID":"6353b564-856d-4648-88f7-b4630ec7bf7b","Type":"ContainerDied","Data":"b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086"} Jan 22 16:36:39 crc kubenswrapper[4758]: I0122 16:36:39.256513 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:39 crc kubenswrapper[4758]: I0122 16:36:39.256603 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:39 crc kubenswrapper[4758]: I0122 16:36:39.303583 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:39 crc kubenswrapper[4758]: I0122 16:36:39.786393 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m8fjx" Jan 22 16:36:43 crc kubenswrapper[4758]: I0122 16:36:43.837373 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:36:43 crc kubenswrapper[4758]: I0122 16:36:43.838093 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:36:43 crc kubenswrapper[4758]: I0122 16:36:43.838163 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:36:43 crc kubenswrapper[4758]: I0122 16:36:43.838969 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7cae046a3bb22e5d3a084fb0fecaa7e3bddc05b5196ba2795a8cbf04c254117"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:36:43 crc kubenswrapper[4758]: I0122 16:36:43.839070 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://a7cae046a3bb22e5d3a084fb0fecaa7e3bddc05b5196ba2795a8cbf04c254117" gracePeriod=600 Jan 22 16:36:44 crc kubenswrapper[4758]: I0122 16:36:44.175809 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" podUID="e926035e-0af8-45eb-9451-19c8827363c3" containerName="oauth-openshift" containerID="cri-o://237c3a2fb8131d656e985482a0995ed58bc9dfebd0e06074bdce07f532f3f33d" gracePeriod=15 Jan 22 16:36:44 crc kubenswrapper[4758]: I0122 16:36:44.783404 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="a7cae046a3bb22e5d3a084fb0fecaa7e3bddc05b5196ba2795a8cbf04c254117" exitCode=0 Jan 22 16:36:44 crc kubenswrapper[4758]: I0122 16:36:44.783491 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"a7cae046a3bb22e5d3a084fb0fecaa7e3bddc05b5196ba2795a8cbf04c254117"} Jan 22 16:36:44 crc kubenswrapper[4758]: I0122 16:36:44.784108 4758 scope.go:117] "RemoveContainer" containerID="4fbf5569b30ec6397014b282bf67eca77930756b413c7554ab366d2d31a4f548" Jan 22 16:36:44 crc kubenswrapper[4758]: I0122 16:36:44.786811 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwpnp" event={"ID":"d5b62b0f-9c35-46f7-b806-69b0a53eaf63","Type":"ContainerStarted","Data":"0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22"} Jan 22 16:36:44 crc kubenswrapper[4758]: I0122 16:36:44.788302 4758 generic.go:334] "Generic (PLEG): container finished" podID="e926035e-0af8-45eb-9451-19c8827363c3" containerID="237c3a2fb8131d656e985482a0995ed58bc9dfebd0e06074bdce07f532f3f33d" exitCode=0 Jan 22 16:36:44 crc kubenswrapper[4758]: I0122 16:36:44.788341 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" event={"ID":"e926035e-0af8-45eb-9451-19c8827363c3","Type":"ContainerDied","Data":"237c3a2fb8131d656e985482a0995ed58bc9dfebd0e06074bdce07f532f3f33d"} Jan 22 16:36:44 crc kubenswrapper[4758]: I0122 16:36:44.813313 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lwpnp" podStartSLOduration=11.455709672 podStartE2EDuration="14.813292979s" podCreationTimestamp="2026-01-22 16:36:30 +0000 UTC" firstStartedPulling="2026-01-22 16:36:32.685794963 +0000 UTC m=+414.169134258" lastFinishedPulling="2026-01-22 16:36:36.04337828 +0000 UTC m=+417.526717565" observedRunningTime="2026-01-22 16:36:44.806413687 +0000 UTC m=+426.289752972" watchObservedRunningTime="2026-01-22 16:36:44.813292979 +0000 UTC m=+426.296632264" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.129759 4758 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qcbh7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body= Jan 22 16:36:45 crc 
kubenswrapper[4758]: I0122 16:36:45.129822 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" podUID="e926035e-0af8-45eb-9451-19c8827363c3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.444887 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473136 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e926035e-0af8-45eb-9451-19c8827363c3-audit-dir\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473196 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwfxm\" (UniqueName: \"kubernetes.io/projected/e926035e-0af8-45eb-9451-19c8827363c3-kube-api-access-cwfxm\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473236 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-error\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473236 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e926035e-0af8-45eb-9451-19c8827363c3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473264 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-service-ca\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473290 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-session\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473322 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-cliconfig\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473359 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-trusted-ca-bundle\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473384 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-provider-selection\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473407 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-ocp-branding-template\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473437 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-login\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473465 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-idp-0-file-data\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473503 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-audit-policies\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc 
kubenswrapper[4758]: I0122 16:36:45.473521 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-serving-cert\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473541 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-router-certs\") pod \"e926035e-0af8-45eb-9451-19c8827363c3\" (UID: \"e926035e-0af8-45eb-9451-19c8827363c3\") " Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.473736 4758 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e926035e-0af8-45eb-9451-19c8827363c3-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.474282 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.474361 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.474632 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.474899 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.482591 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.483166 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.483592 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.486557 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e926035e-0af8-45eb-9451-19c8827363c3-kube-api-access-cwfxm" (OuterVolumeSpecName: "kube-api-access-cwfxm") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "kube-api-access-cwfxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.486573 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.487158 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.487203 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.490044 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.491340 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65454647d6-pr5dd"] Jan 22 16:36:45 crc kubenswrapper[4758]: E0122 16:36:45.491613 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e926035e-0af8-45eb-9451-19c8827363c3" containerName="oauth-openshift" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.491628 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e926035e-0af8-45eb-9451-19c8827363c3" containerName="oauth-openshift" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.491748 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e926035e-0af8-45eb-9451-19c8827363c3" containerName="oauth-openshift" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.492194 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.494895 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e926035e-0af8-45eb-9451-19c8827363c3" (UID: "e926035e-0af8-45eb-9451-19c8827363c3"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.517528 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65454647d6-pr5dd"] Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574466 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574498 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574565 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-policies\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574594 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 
16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574682 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574705 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574835 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-dir\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574863 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-login\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.574949 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr9wk\" (UniqueName: \"kubernetes.io/projected/9deedfb3-0e0e-4287-81de-8131aac4b6b0-kube-api-access-qr9wk\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575003 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575022 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65454647d6-pr5dd\" 
(UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575095 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-session\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575149 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-error\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575312 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575361 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575374 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575386 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575396 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575421 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575431 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575505 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575518 4758 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575527 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwfxm\" (UniqueName: \"kubernetes.io/projected/e926035e-0af8-45eb-9451-19c8827363c3-kube-api-access-cwfxm\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575537 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575545 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.575556 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e926035e-0af8-45eb-9451-19c8827363c3-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.676712 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-session\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677139 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-error\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677166 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677189 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-policies\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " 
pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677275 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677349 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-dir\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677377 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-login\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677402 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677423 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677444 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9wk\" (UniqueName: \"kubernetes.io/projected/9deedfb3-0e0e-4287-81de-8131aac4b6b0-kube-api-access-qr9wk\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " 
pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677470 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677878 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-dir\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.677989 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.678455 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.678598 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-policies\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.679285 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.682454 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.682626 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc 
kubenswrapper[4758]: I0122 16:36:45.682871 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-login\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.683133 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.683164 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-session\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.684043 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.684274 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-error\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.685854 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.697692 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr9wk\" (UniqueName: \"kubernetes.io/projected/9deedfb3-0e0e-4287-81de-8131aac4b6b0-kube-api-access-qr9wk\") pod \"oauth-openshift-65454647d6-pr5dd\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.795378 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" event={"ID":"e926035e-0af8-45eb-9451-19c8827363c3","Type":"ContainerDied","Data":"06fe3b48de957488ed3233fc44b2211826eaf0f4701de5c80c870b4221289206"} Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.795430 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qcbh7" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.795473 4758 scope.go:117] "RemoveContainer" containerID="237c3a2fb8131d656e985482a0995ed58bc9dfebd0e06074bdce07f532f3f33d" Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.798039 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"d2534229fb8e289739e191d5d234a2856a0000b3c73a9c17a9c7dddb12404503"} Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.837264 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcbh7"] Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.840732 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qcbh7"] Jan 22 16:36:45 crc kubenswrapper[4758]: I0122 16:36:45.840937 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.397434 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65454647d6-pr5dd"] Jan 22 16:36:46 crc kubenswrapper[4758]: W0122 16:36:46.405658 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9deedfb3_0e0e_4287_81de_8131aac4b6b0.slice/crio-31c7878860a43c0e87efbe64ae4c7904fda5c58eafbdea82749e9d63b92cd61a WatchSource:0}: Error finding container 31c7878860a43c0e87efbe64ae4c7904fda5c58eafbdea82749e9d63b92cd61a: Status 404 returned error can't find the container with id 31c7878860a43c0e87efbe64ae4c7904fda5c58eafbdea82749e9d63b92cd61a Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.807032 4758 generic.go:334] "Generic (PLEG): container finished" podID="2970941d-360b-4f65-befc-15b942098ef1" containerID="c254617970c78aaa037b0c8fa28460c334ccf1a1b8c10a0df228cbffd79ebdfc" exitCode=0 Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.827806 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e926035e-0af8-45eb-9451-19c8827363c3" path="/var/lib/kubelet/pods/e926035e-0af8-45eb-9451-19c8827363c3/volumes" Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.828516 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45rp2" event={"ID":"2970941d-360b-4f65-befc-15b942098ef1","Type":"ContainerDied","Data":"c254617970c78aaa037b0c8fa28460c334ccf1a1b8c10a0df228cbffd79ebdfc"} Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.828558 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" event={"ID":"9deedfb3-0e0e-4287-81de-8131aac4b6b0","Type":"ContainerStarted","Data":"fc184b4d0a6ffa3c0042ec525d291d32b21dad822f07345d3dd2db1dfc4585ba"} Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.828584 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.828593 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" 
event={"ID":"9deedfb3-0e0e-4287-81de-8131aac4b6b0","Type":"ContainerStarted","Data":"31c7878860a43c0e87efbe64ae4c7904fda5c58eafbdea82749e9d63b92cd61a"} Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.828602 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nnrg" event={"ID":"6353b564-856d-4648-88f7-b4630ec7bf7b","Type":"ContainerStarted","Data":"cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa"} Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.860528 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" podStartSLOduration=27.860507164 podStartE2EDuration="27.860507164s" podCreationTimestamp="2026-01-22 16:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:36:46.858447637 +0000 UTC m=+428.341786942" watchObservedRunningTime="2026-01-22 16:36:46.860507164 +0000 UTC m=+428.343846449" Jan 22 16:36:46 crc kubenswrapper[4758]: I0122 16:36:46.881920 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6nnrg" podStartSLOduration=3.598538688 podStartE2EDuration="15.881904119s" podCreationTimestamp="2026-01-22 16:36:31 +0000 UTC" firstStartedPulling="2026-01-22 16:36:33.711447107 +0000 UTC m=+415.194786382" lastFinishedPulling="2026-01-22 16:36:45.994812508 +0000 UTC m=+427.478151813" observedRunningTime="2026-01-22 16:36:46.877196228 +0000 UTC m=+428.360535523" watchObservedRunningTime="2026-01-22 16:36:46.881904119 +0000 UTC m=+428.365243394" Jan 22 16:36:47 crc kubenswrapper[4758]: I0122 16:36:47.066722 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 16:36:47 crc kubenswrapper[4758]: I0122 16:36:47.832878 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45rp2" event={"ID":"2970941d-360b-4f65-befc-15b942098ef1","Type":"ContainerStarted","Data":"00800407f39b1b4bdad40a59260c88b5d5df19bbc7307a8a88f02693ff0e9f9e"} Jan 22 16:36:48 crc kubenswrapper[4758]: I0122 16:36:48.856245 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-45rp2" podStartSLOduration=3.049891326 podStartE2EDuration="19.856227217s" podCreationTimestamp="2026-01-22 16:36:29 +0000 UTC" firstStartedPulling="2026-01-22 16:36:30.663579592 +0000 UTC m=+412.146918917" lastFinishedPulling="2026-01-22 16:36:47.469915523 +0000 UTC m=+428.953254808" observedRunningTime="2026-01-22 16:36:48.854628382 +0000 UTC m=+430.337967677" watchObservedRunningTime="2026-01-22 16:36:48.856227217 +0000 UTC m=+430.339566502" Jan 22 16:36:49 crc kubenswrapper[4758]: I0122 16:36:49.947678 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:49 crc kubenswrapper[4758]: I0122 16:36:49.948070 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:36:51 crc kubenswrapper[4758]: I0122 16:36:51.022223 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-45rp2" podUID="2970941d-360b-4f65-befc-15b942098ef1" containerName="registry-server" probeResult="failure" output=< Jan 22 16:36:51 crc kubenswrapper[4758]: timeout: 
failed to connect service ":50051" within 1s Jan 22 16:36:51 crc kubenswrapper[4758]: > Jan 22 16:36:51 crc kubenswrapper[4758]: I0122 16:36:51.258489 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:51 crc kubenswrapper[4758]: I0122 16:36:51.258552 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:51 crc kubenswrapper[4758]: I0122 16:36:51.508796 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64585bb48f-6psbz"] Jan 22 16:36:51 crc kubenswrapper[4758]: I0122 16:36:51.509032 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" podUID="1ce9b9d8-0324-4470-8f31-9feef5a1a975" containerName="controller-manager" containerID="cri-o://02abd7666e9039c4bca4f7c5dfb6c84a67ab1603607a196aa9aafef19c30b46a" gracePeriod=30 Jan 22 16:36:51 crc kubenswrapper[4758]: I0122 16:36:51.566629 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:51 crc kubenswrapper[4758]: I0122 16:36:51.930793 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:36:52 crc kubenswrapper[4758]: I0122 16:36:52.228012 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:52 crc kubenswrapper[4758]: I0122 16:36:52.228255 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:52 crc kubenswrapper[4758]: I0122 16:36:52.282188 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:52 crc kubenswrapper[4758]: I0122 16:36:52.911838 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6nnrg" Jan 22 16:36:53 crc kubenswrapper[4758]: I0122 16:36:53.661899 4758 patch_prober.go:28] interesting pod/controller-manager-64585bb48f-6psbz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 22 16:36:53 crc kubenswrapper[4758]: I0122 16:36:53.661961 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" podUID="1ce9b9d8-0324-4470-8f31-9feef5a1a975" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 22 16:36:59 crc kubenswrapper[4758]: I0122 16:36:59.622790 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-64585bb48f-6psbz_1ce9b9d8-0324-4470-8f31-9feef5a1a975/controller-manager/0.log" Jan 22 16:36:59 crc kubenswrapper[4758]: I0122 16:36:59.630348 4758 generic.go:334] "Generic (PLEG): container finished" podID="1ce9b9d8-0324-4470-8f31-9feef5a1a975" containerID="02abd7666e9039c4bca4f7c5dfb6c84a67ab1603607a196aa9aafef19c30b46a" exitCode=-1 Jan 22 16:36:59 crc kubenswrapper[4758]: I0122 16:36:59.630402 4758 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" event={"ID":"1ce9b9d8-0324-4470-8f31-9feef5a1a975","Type":"ContainerDied","Data":"02abd7666e9039c4bca4f7c5dfb6c84a67ab1603607a196aa9aafef19c30b46a"} Jan 22 16:36:59 crc kubenswrapper[4758]: I0122 16:36:59.984976 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:37:00 crc kubenswrapper[4758]: I0122 16:37:00.027644 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-45rp2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.579588 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.617076 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b46f89db7-56qr2"] Jan 22 16:37:05 crc kubenswrapper[4758]: E0122 16:37:03.617341 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce9b9d8-0324-4470-8f31-9feef5a1a975" containerName="controller-manager" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.617355 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce9b9d8-0324-4470-8f31-9feef5a1a975" containerName="controller-manager" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.617479 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce9b9d8-0324-4470-8f31-9feef5a1a975" containerName="controller-manager" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.617997 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.621438 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj22t\" (UniqueName: \"kubernetes.io/projected/1ce9b9d8-0324-4470-8f31-9feef5a1a975-kube-api-access-jj22t\") pod \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.621603 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ce9b9d8-0324-4470-8f31-9feef5a1a975-serving-cert\") pod \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.621778 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-client-ca\") pod \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.621827 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-config\") pod \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.621863 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-proxy-ca-bundles\") pod 
\"1ce9b9d8-0324-4470-8f31-9feef5a1a975\" (UID: \"1ce9b9d8-0324-4470-8f31-9feef5a1a975\") " Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.622113 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e5039c-273e-4208-9295-329a27e6d22b-config\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.622144 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnv7\" (UniqueName: \"kubernetes.io/projected/11e5039c-273e-4208-9295-329a27e6d22b-kube-api-access-5vnv7\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.622181 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e5039c-273e-4208-9295-329a27e6d22b-serving-cert\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.622210 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11e5039c-273e-4208-9295-329a27e6d22b-client-ca\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.622245 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e5039c-273e-4208-9295-329a27e6d22b-proxy-ca-bundles\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.623070 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-client-ca" (OuterVolumeSpecName: "client-ca") pod "1ce9b9d8-0324-4470-8f31-9feef5a1a975" (UID: "1ce9b9d8-0324-4470-8f31-9feef5a1a975"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.623809 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-config" (OuterVolumeSpecName: "config") pod "1ce9b9d8-0324-4470-8f31-9feef5a1a975" (UID: "1ce9b9d8-0324-4470-8f31-9feef5a1a975"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.624368 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1ce9b9d8-0324-4470-8f31-9feef5a1a975" (UID: "1ce9b9d8-0324-4470-8f31-9feef5a1a975"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.632473 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b46f89db7-56qr2"] Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.633816 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce9b9d8-0324-4470-8f31-9feef5a1a975-kube-api-access-jj22t" (OuterVolumeSpecName: "kube-api-access-jj22t") pod "1ce9b9d8-0324-4470-8f31-9feef5a1a975" (UID: "1ce9b9d8-0324-4470-8f31-9feef5a1a975"). InnerVolumeSpecName "kube-api-access-jj22t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.634445 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ce9b9d8-0324-4470-8f31-9feef5a1a975-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1ce9b9d8-0324-4470-8f31-9feef5a1a975" (UID: "1ce9b9d8-0324-4470-8f31-9feef5a1a975"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.722859 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e5039c-273e-4208-9295-329a27e6d22b-config\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.722912 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vnv7\" (UniqueName: \"kubernetes.io/projected/11e5039c-273e-4208-9295-329a27e6d22b-kube-api-access-5vnv7\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.722947 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e5039c-273e-4208-9295-329a27e6d22b-serving-cert\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.722975 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11e5039c-273e-4208-9295-329a27e6d22b-client-ca\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.723005 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e5039c-273e-4208-9295-329a27e6d22b-proxy-ca-bundles\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.723077 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.723094 
4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.723106 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ce9b9d8-0324-4470-8f31-9feef5a1a975-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.723121 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj22t\" (UniqueName: \"kubernetes.io/projected/1ce9b9d8-0324-4470-8f31-9feef5a1a975-kube-api-access-jj22t\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.723134 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ce9b9d8-0324-4470-8f31-9feef5a1a975-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.724346 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e5039c-273e-4208-9295-329a27e6d22b-proxy-ca-bundles\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.724556 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e5039c-273e-4208-9295-329a27e6d22b-config\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.724627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11e5039c-273e-4208-9295-329a27e6d22b-client-ca\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.727540 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e5039c-273e-4208-9295-329a27e6d22b-serving-cert\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.743264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vnv7\" (UniqueName: \"kubernetes.io/projected/11e5039c-273e-4208-9295-329a27e6d22b-kube-api-access-5vnv7\") pod \"controller-manager-5b46f89db7-56qr2\" (UID: \"11e5039c-273e-4208-9295-329a27e6d22b\") " pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:03.936156 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:04.371228 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" event={"ID":"1ce9b9d8-0324-4470-8f31-9feef5a1a975","Type":"ContainerDied","Data":"f6f8ef7d2dda3c3e8342c376d3ded2af3fc8c3f73d3b4c6fbc00c5276f002bc3"} Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:04.371630 4758 scope.go:117] "RemoveContainer" containerID="02abd7666e9039c4bca4f7c5dfb6c84a67ab1603607a196aa9aafef19c30b46a" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:05.377499 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64585bb48f-6psbz" Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:05.403400 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64585bb48f-6psbz"] Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:05.409991 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64585bb48f-6psbz"] Jan 22 16:37:05 crc kubenswrapper[4758]: I0122 16:37:05.727098 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b46f89db7-56qr2"] Jan 22 16:37:05 crc kubenswrapper[4758]: W0122 16:37:05.736221 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e5039c_273e_4208_9295_329a27e6d22b.slice/crio-e6e84c1c71dde5e03741c689ca975cc440d36264976bdf89e75fd1aff206bb24 WatchSource:0}: Error finding container e6e84c1c71dde5e03741c689ca975cc440d36264976bdf89e75fd1aff206bb24: Status 404 returned error can't find the container with id e6e84c1c71dde5e03741c689ca975cc440d36264976bdf89e75fd1aff206bb24 Jan 22 16:37:06 crc kubenswrapper[4758]: I0122 16:37:06.404191 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" event={"ID":"11e5039c-273e-4208-9295-329a27e6d22b","Type":"ContainerStarted","Data":"65006021a8965b2747a0d7380568c23c54c670ee12a36e84e076b78cbc5f595d"} Jan 22 16:37:06 crc kubenswrapper[4758]: I0122 16:37:06.404839 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:06 crc kubenswrapper[4758]: I0122 16:37:06.404854 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" event={"ID":"11e5039c-273e-4208-9295-329a27e6d22b","Type":"ContainerStarted","Data":"e6e84c1c71dde5e03741c689ca975cc440d36264976bdf89e75fd1aff206bb24"} Jan 22 16:37:06 crc kubenswrapper[4758]: I0122 16:37:06.439479 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" podStartSLOduration=15.439457765 podStartE2EDuration="15.439457765s" podCreationTimestamp="2026-01-22 16:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:37:06.439123526 +0000 UTC m=+447.922462811" watchObservedRunningTime="2026-01-22 16:37:06.439457765 +0000 UTC m=+447.922797060" Jan 22 16:37:06 crc kubenswrapper[4758]: I0122 16:37:06.466451 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" Jan 22 16:37:06 crc kubenswrapper[4758]: I0122 16:37:06.836862 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce9b9d8-0324-4470-8f31-9feef5a1a975" path="/var/lib/kubelet/pods/1ce9b9d8-0324-4470-8f31-9feef5a1a975/volumes" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.149672 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nkbvl"] Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.150634 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.164517 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nkbvl"] Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.300876 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.301052 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16526876-66db-457d-8f80-f02ddc58bca0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.301104 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16526876-66db-457d-8f80-f02ddc58bca0-registry-certificates\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.301221 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16526876-66db-457d-8f80-f02ddc58bca0-registry-tls\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.301291 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcdt\" (UniqueName: \"kubernetes.io/projected/16526876-66db-457d-8f80-f02ddc58bca0-kube-api-access-bpcdt\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.301434 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16526876-66db-457d-8f80-f02ddc58bca0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 
16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.301512 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16526876-66db-457d-8f80-f02ddc58bca0-trusted-ca\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.301630 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16526876-66db-457d-8f80-f02ddc58bca0-bound-sa-token\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.322825 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.403342 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16526876-66db-457d-8f80-f02ddc58bca0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.403402 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16526876-66db-457d-8f80-f02ddc58bca0-registry-certificates\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.403445 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16526876-66db-457d-8f80-f02ddc58bca0-registry-tls\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.403478 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16526876-66db-457d-8f80-f02ddc58bca0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.403502 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpcdt\" (UniqueName: \"kubernetes.io/projected/16526876-66db-457d-8f80-f02ddc58bca0-kube-api-access-bpcdt\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.403528 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/16526876-66db-457d-8f80-f02ddc58bca0-trusted-ca\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.403554 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16526876-66db-457d-8f80-f02ddc58bca0-bound-sa-token\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.405102 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16526876-66db-457d-8f80-f02ddc58bca0-registry-certificates\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.405149 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16526876-66db-457d-8f80-f02ddc58bca0-trusted-ca\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.405386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16526876-66db-457d-8f80-f02ddc58bca0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.410129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16526876-66db-457d-8f80-f02ddc58bca0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.410577 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16526876-66db-457d-8f80-f02ddc58bca0-registry-tls\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.420703 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpcdt\" (UniqueName: \"kubernetes.io/projected/16526876-66db-457d-8f80-f02ddc58bca0-kube-api-access-bpcdt\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: I0122 16:37:08.422905 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16526876-66db-457d-8f80-f02ddc58bca0-bound-sa-token\") pod \"image-registry-66df7c8f76-nkbvl\" (UID: \"16526876-66db-457d-8f80-f02ddc58bca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:08 crc kubenswrapper[4758]: 
I0122 16:37:08.472986 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:09 crc kubenswrapper[4758]: I0122 16:37:09.087994 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nkbvl"] Jan 22 16:37:09 crc kubenswrapper[4758]: W0122 16:37:09.096204 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16526876_66db_457d_8f80_f02ddc58bca0.slice/crio-dff4d2aa1bc8fe52ec31c6d09ae00315cbdda19cfc5fd0143d26bdd3dbd433ff WatchSource:0}: Error finding container dff4d2aa1bc8fe52ec31c6d09ae00315cbdda19cfc5fd0143d26bdd3dbd433ff: Status 404 returned error can't find the container with id dff4d2aa1bc8fe52ec31c6d09ae00315cbdda19cfc5fd0143d26bdd3dbd433ff Jan 22 16:37:09 crc kubenswrapper[4758]: I0122 16:37:09.565153 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" event={"ID":"16526876-66db-457d-8f80-f02ddc58bca0","Type":"ContainerStarted","Data":"046550588d32ee072da8165fc9c9f0b9cf92b68ef078c514b3802e1c9e8440fb"} Jan 22 16:37:09 crc kubenswrapper[4758]: I0122 16:37:09.565525 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:09 crc kubenswrapper[4758]: I0122 16:37:09.565545 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" event={"ID":"16526876-66db-457d-8f80-f02ddc58bca0","Type":"ContainerStarted","Data":"dff4d2aa1bc8fe52ec31c6d09ae00315cbdda19cfc5fd0143d26bdd3dbd433ff"} Jan 22 16:37:09 crc kubenswrapper[4758]: I0122 16:37:09.587220 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" podStartSLOduration=1.587200937 podStartE2EDuration="1.587200937s" podCreationTimestamp="2026-01-22 16:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:37:09.583257137 +0000 UTC m=+451.066596422" watchObservedRunningTime="2026-01-22 16:37:09.587200937 +0000 UTC m=+451.070540222" Jan 22 16:37:11 crc kubenswrapper[4758]: I0122 16:37:11.399604 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577db457fc-nw295"] Jan 22 16:37:11 crc kubenswrapper[4758]: I0122 16:37:11.399889 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" podUID="6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" containerName="route-controller-manager" containerID="cri-o://5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32" gracePeriod=30 Jan 22 16:37:11 crc kubenswrapper[4758]: I0122 16:37:11.973112 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.114700 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-config\") pod \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.114850 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-client-ca\") pod \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.114895 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f55ps\" (UniqueName: \"kubernetes.io/projected/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-kube-api-access-f55ps\") pod \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.114918 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-serving-cert\") pod \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\" (UID: \"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee\") " Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.115802 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" (UID: "6ec0d921-eec1-4b11-9df3-c3566dfbb4ee"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.116791 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-config" (OuterVolumeSpecName: "config") pod "6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" (UID: "6ec0d921-eec1-4b11-9df3-c3566dfbb4ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.120805 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-kube-api-access-f55ps" (OuterVolumeSpecName: "kube-api-access-f55ps") pod "6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" (UID: "6ec0d921-eec1-4b11-9df3-c3566dfbb4ee"). InnerVolumeSpecName "kube-api-access-f55ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.141116 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" (UID: "6ec0d921-eec1-4b11-9df3-c3566dfbb4ee"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.216411 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.216448 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.216464 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f55ps\" (UniqueName: \"kubernetes.io/projected/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-kube-api-access-f55ps\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.216479 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.616559 4758 generic.go:334] "Generic (PLEG): container finished" podID="6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" containerID="5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32" exitCode=0 Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.616607 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" event={"ID":"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee","Type":"ContainerDied","Data":"5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32"} Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.616649 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" event={"ID":"6ec0d921-eec1-4b11-9df3-c3566dfbb4ee","Type":"ContainerDied","Data":"9d7686cd07f39b4999a2e9c81d98eca83b59cf55ecc72783647591ed629d8978"} Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.616672 4758 scope.go:117] "RemoveContainer" containerID="5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.616616 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577db457fc-nw295" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.649527 4758 scope.go:117] "RemoveContainer" containerID="5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32" Jan 22 16:37:12 crc kubenswrapper[4758]: E0122 16:37:12.651594 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32\": container with ID starting with 5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32 not found: ID does not exist" containerID="5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.651668 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32"} err="failed to get container status \"5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32\": rpc error: code = NotFound desc = could not find container \"5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32\": container with ID starting with 5853951e010d119da7cc3298e2dbc2b05e69e32db60d1758b900543165593b32 not found: ID does not exist" Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.656771 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577db457fc-nw295"] Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.661922 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577db457fc-nw295"] Jan 22 16:37:12 crc kubenswrapper[4758]: I0122 16:37:12.816028 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" path="/var/lib/kubelet/pods/6ec0d921-eec1-4b11-9df3-c3566dfbb4ee/volumes" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.388944 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p"] Jan 22 16:37:13 crc kubenswrapper[4758]: E0122 16:37:13.389314 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" containerName="route-controller-manager" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.389338 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" containerName="route-controller-manager" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.389465 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec0d921-eec1-4b11-9df3-c3566dfbb4ee" containerName="route-controller-manager" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.390689 4758 util.go:30] "No sandbox for pod can be found. 
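The paired "ContainerStatus from runtime service failed ... code = NotFound" and "DeleteContainer returned error" entries just above are the usual race between the container having already been removed and a follow-up status lookup; the kubelet logs them and carries on. Below is a minimal Go sketch of how a CRI-style client can treat that specific gRPC code as benign. The removeIfPresent helper and the fake lookup are illustrative only; the real pieces are status.Code and codes.NotFound from grpc-go.

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent (hypothetical helper) checks a container before deleting it
    // and treats a gRPC NotFound as "already gone" rather than as a failure,
    // mirroring how the NotFound above is logged but not fatal.
    func removeIfPresent(lookup func(id string) error, id string) error {
        if err := lookup(id); err != nil {
            if status.Code(err) == codes.NotFound {
                fmt.Printf("container %s already removed, nothing to do\n", id)
                return nil
            }
            return fmt.Errorf("container status lookup failed: %w", err)
        }
        fmt.Printf("container %s still present, would remove it now\n", id)
        return nil
    }

    func main() {
        // Simulate the runtime answering NotFound, as in the entries above.
        notFound := status.Error(codes.NotFound, `could not find container "5853951e01..."`)
        _ = removeIfPresent(func(string) error { return notFound }, "5853951e01")

        // Any other error is still surfaced to the caller.
        if err := removeIfPresent(func(string) error { return errors.New("runtime unavailable") }, "5853951e01"); err != nil {
            fmt.Println(err)
        }
    }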
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.393441 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.394021 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.394297 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.394614 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.397183 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.397512 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.414125 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p"] Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.534092 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb5vm\" (UniqueName: \"kubernetes.io/projected/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-kube-api-access-nb5vm\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.534178 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-client-ca\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.534203 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-serving-cert\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.534279 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-config\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.636088 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-config\") pod 
\"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.636153 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb5vm\" (UniqueName: \"kubernetes.io/projected/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-kube-api-access-nb5vm\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.636202 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-client-ca\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.636225 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-serving-cert\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.637617 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-client-ca\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.637825 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-config\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.640682 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-serving-cert\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.653897 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb5vm\" (UniqueName: \"kubernetes.io/projected/44a7e8fc-3f05-4b46-bbff-0a3394b8d884-kube-api-access-nb5vm\") pod \"route-controller-manager-5876db6c88-xtp4p\" (UID: \"44a7e8fc-3f05-4b46-bbff-0a3394b8d884\") " pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:13 crc kubenswrapper[4758]: I0122 16:37:13.716272 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:14 crc kubenswrapper[4758]: I0122 16:37:14.126153 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p"] Jan 22 16:37:14 crc kubenswrapper[4758]: I0122 16:37:14.631292 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" event={"ID":"44a7e8fc-3f05-4b46-bbff-0a3394b8d884","Type":"ContainerStarted","Data":"e8777cf0f17e01e24afce4c779219984f07a84460a6091c21a77714efa0ba130"} Jan 22 16:37:14 crc kubenswrapper[4758]: I0122 16:37:14.631879 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" event={"ID":"44a7e8fc-3f05-4b46-bbff-0a3394b8d884","Type":"ContainerStarted","Data":"8ff11304c0028039b85fbfdf37de5fa8a40b4a2f67427d7f18ebee172fb62399"} Jan 22 16:37:14 crc kubenswrapper[4758]: I0122 16:37:14.632008 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:14 crc kubenswrapper[4758]: I0122 16:37:14.655358 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" podStartSLOduration=3.655336327 podStartE2EDuration="3.655336327s" podCreationTimestamp="2026-01-22 16:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:37:14.650333399 +0000 UTC m=+456.133672704" watchObservedRunningTime="2026-01-22 16:37:14.655336327 +0000 UTC m=+456.138675612" Jan 22 16:37:15 crc kubenswrapper[4758]: I0122 16:37:15.174593 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" Jan 22 16:37:28 crc kubenswrapper[4758]: I0122 16:37:28.478245 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nkbvl" Jan 22 16:37:28 crc kubenswrapper[4758]: I0122 16:37:28.535331 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kd79d"] Jan 22 16:37:53 crc kubenswrapper[4758]: I0122 16:37:53.577929 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" podUID="1c983b09-f715-422e-960d-36dcc610c30b" containerName="registry" containerID="cri-o://a05e46dff7100ab1d08ccefc40448405fa7dd4821e00d9b7ec4ac4175d7c6f6b" gracePeriod=30 Jan 22 16:37:53 crc kubenswrapper[4758]: I0122 16:37:53.884534 4758 generic.go:334] "Generic (PLEG): container finished" podID="1c983b09-f715-422e-960d-36dcc610c30b" containerID="a05e46dff7100ab1d08ccefc40448405fa7dd4821e00d9b7ec4ac4175d7c6f6b" exitCode=0 Jan 22 16:37:53 crc kubenswrapper[4758]: I0122 16:37:53.884584 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" event={"ID":"1c983b09-f715-422e-960d-36dcc610c30b","Type":"ContainerDied","Data":"a05e46dff7100ab1d08ccefc40448405fa7dd4821e00d9b7ec4ac4175d7c6f6b"} Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.511824 4758 util.go:48] "No ready sandbox for pod can be found. 
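The registry teardown that begins here follows the same pattern as the route-controller-manager replacement earlier: an API DELETE arrives ("SyncLoop DELETE") and, when the pod is torn down, the kubelet kills the registry container with the pod's termination grace period (gracePeriod=30, the usual default). These deletes were presumably issued by the deployment rollout rather than by hand, but the effect is the same as an explicit API-side delete. A minimal client-go sketch of such a delete with an explicit grace period, using the pod name from the log; this is hypothetical usage for illustration, not what actually produced these entries.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default ~/.kube/config location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // 30s mirrors the gracePeriod=30 the kubelet logs when killing the
        // registry container; omitting GracePeriodSeconds would fall back to the
        // pod's terminationGracePeriodSeconds anyway.
        grace := int64(30)
        err = cs.CoreV1().Pods("openshift-image-registry").Delete(
            context.TODO(),
            "image-registry-697d97f7c8-kd79d",
            metav1.DeleteOptions{GracePeriodSeconds: &grace},
        )
        fmt.Println("delete returned:", err)
    }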
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.609283 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-bound-sa-token\") pod \"1c983b09-f715-422e-960d-36dcc610c30b\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.609337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-registry-tls\") pod \"1c983b09-f715-422e-960d-36dcc610c30b\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.609557 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c983b09-f715-422e-960d-36dcc610c30b-installation-pull-secrets\") pod \"1c983b09-f715-422e-960d-36dcc610c30b\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.609735 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5lhh\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-kube-api-access-f5lhh\") pod \"1c983b09-f715-422e-960d-36dcc610c30b\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.609800 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-registry-certificates\") pod \"1c983b09-f715-422e-960d-36dcc610c30b\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.609842 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-trusted-ca\") pod \"1c983b09-f715-422e-960d-36dcc610c30b\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.609865 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c983b09-f715-422e-960d-36dcc610c30b-ca-trust-extracted\") pod \"1c983b09-f715-422e-960d-36dcc610c30b\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.609965 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1c983b09-f715-422e-960d-36dcc610c30b\" (UID: \"1c983b09-f715-422e-960d-36dcc610c30b\") " Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.611096 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1c983b09-f715-422e-960d-36dcc610c30b" (UID: "1c983b09-f715-422e-960d-36dcc610c30b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.611882 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1c983b09-f715-422e-960d-36dcc610c30b" (UID: "1c983b09-f715-422e-960d-36dcc610c30b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.616793 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1c983b09-f715-422e-960d-36dcc610c30b" (UID: "1c983b09-f715-422e-960d-36dcc610c30b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.617255 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-kube-api-access-f5lhh" (OuterVolumeSpecName: "kube-api-access-f5lhh") pod "1c983b09-f715-422e-960d-36dcc610c30b" (UID: "1c983b09-f715-422e-960d-36dcc610c30b"). InnerVolumeSpecName "kube-api-access-f5lhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.617476 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c983b09-f715-422e-960d-36dcc610c30b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1c983b09-f715-422e-960d-36dcc610c30b" (UID: "1c983b09-f715-422e-960d-36dcc610c30b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.620256 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1c983b09-f715-422e-960d-36dcc610c30b" (UID: "1c983b09-f715-422e-960d-36dcc610c30b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.627062 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1c983b09-f715-422e-960d-36dcc610c30b" (UID: "1c983b09-f715-422e-960d-36dcc610c30b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.640942 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c983b09-f715-422e-960d-36dcc610c30b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1c983b09-f715-422e-960d-36dcc610c30b" (UID: "1c983b09-f715-422e-960d-36dcc610c30b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.712517 4758 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.712573 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c983b09-f715-422e-960d-36dcc610c30b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.712592 4758 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c983b09-f715-422e-960d-36dcc610c30b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.712609 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.712625 4758 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.712637 4758 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c983b09-f715-422e-960d-36dcc610c30b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.712650 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5lhh\" (UniqueName: \"kubernetes.io/projected/1c983b09-f715-422e-960d-36dcc610c30b-kube-api-access-f5lhh\") on node \"crc\" DevicePath \"\"" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.892250 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" event={"ID":"1c983b09-f715-422e-960d-36dcc610c30b","Type":"ContainerDied","Data":"522bd19cef8372e2e486841d1a62589fd1e4fd104e07af43a8df5af7304c1632"} Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.892362 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kd79d" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.892900 4758 scope.go:117] "RemoveContainer" containerID="a05e46dff7100ab1d08ccefc40448405fa7dd4821e00d9b7ec4ac4175d7c6f6b" Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.918474 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kd79d"] Jan 22 16:37:54 crc kubenswrapper[4758]: I0122 16:37:54.923468 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kd79d"] Jan 22 16:37:56 crc kubenswrapper[4758]: I0122 16:37:56.816144 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c983b09-f715-422e-960d-36dcc610c30b" path="/var/lib/kubelet/pods/1c983b09-f715-422e-960d-36dcc610c30b/volumes" Jan 22 16:38:43 crc kubenswrapper[4758]: I0122 16:38:43.259462 4758 scope.go:117] "RemoveContainer" containerID="769905b650bf3b5b3b8be0a6146c1f7ba0f9a6d50f438f0fccc8f1f87fcdeefe" Jan 22 16:38:43 crc kubenswrapper[4758]: I0122 16:38:43.290126 4758 scope.go:117] "RemoveContainer" containerID="6a26ad8078d81d9531f2bbea178c58bbe2212adad5804e0620199758ada95f29" Jan 22 16:38:43 crc kubenswrapper[4758]: I0122 16:38:43.315914 4758 scope.go:117] "RemoveContainer" containerID="e96fc013e143123006a46ad80975200eecc834f0c5909cc49b4d46f53b63c771" Jan 22 16:38:43 crc kubenswrapper[4758]: I0122 16:38:43.331439 4758 scope.go:117] "RemoveContainer" containerID="d878934d875359337952188a2bebc2f7448d994129f6aa2d57436b2221188ed8" Jan 22 16:39:13 crc kubenswrapper[4758]: I0122 16:39:13.838469 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:39:13 crc kubenswrapper[4758]: I0122 16:39:13.839119 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:39:43 crc kubenswrapper[4758]: I0122 16:39:43.837499 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:39:43 crc kubenswrapper[4758]: I0122 16:39:43.838310 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:40:13 crc kubenswrapper[4758]: I0122 16:40:13.837336 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:40:13 crc kubenswrapper[4758]: I0122 16:40:13.839377 4758 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:40:13 crc kubenswrapper[4758]: I0122 16:40:13.839465 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:40:13 crc kubenswrapper[4758]: I0122 16:40:13.840149 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2534229fb8e289739e191d5d234a2856a0000b3c73a9c17a9c7dddb12404503"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:40:13 crc kubenswrapper[4758]: I0122 16:40:13.840245 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://d2534229fb8e289739e191d5d234a2856a0000b3c73a9c17a9c7dddb12404503" gracePeriod=600 Jan 22 16:40:16 crc kubenswrapper[4758]: I0122 16:40:16.179054 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="d2534229fb8e289739e191d5d234a2856a0000b3c73a9c17a9c7dddb12404503" exitCode=0 Jan 22 16:40:16 crc kubenswrapper[4758]: I0122 16:40:16.179097 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"d2534229fb8e289739e191d5d234a2856a0000b3c73a9c17a9c7dddb12404503"} Jan 22 16:40:16 crc kubenswrapper[4758]: I0122 16:40:16.179516 4758 scope.go:117] "RemoveContainer" containerID="a7cae046a3bb22e5d3a084fb0fecaa7e3bddc05b5196ba2795a8cbf04c254117" Jan 22 16:40:17 crc kubenswrapper[4758]: I0122 16:40:17.190219 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"d0b336b68370ee625e40b6f05f78d3e38cf1d61c80e48d4c0f21417f2aeb9ed4"} Jan 22 16:40:43 crc kubenswrapper[4758]: I0122 16:40:43.382131 4758 scope.go:117] "RemoveContainer" containerID="c8fe988cb0db8cceebd7070d798a4e7d4a5e4221466e370ef86ff48d66c220f6" Jan 22 16:40:43 crc kubenswrapper[4758]: I0122 16:40:43.405083 4758 scope.go:117] "RemoveContainer" containerID="b9a1b8bc551fc1a90f093ecbae7e6a2e5dee6207119888b22c551b5e4ad3baf0" Jan 22 16:40:43 crc kubenswrapper[4758]: I0122 16:40:43.424159 4758 scope.go:117] "RemoveContainer" containerID="461d9bec2eafa95296bfe9ed6d6ed0382e6d240aa56ca0934f9076d1c3e426f6" Jan 22 16:40:43 crc kubenswrapper[4758]: I0122 16:40:43.451137 4758 scope.go:117] "RemoveContainer" containerID="4a8e74249b93d523f6ff46053629cce981e04cada3afea5a2fe676f782f9c84a" Jan 22 16:40:43 crc kubenswrapper[4758]: I0122 16:40:43.479575 4758 scope.go:117] "RemoveContainer" containerID="e8ed5fc8196221585826d54aa6de4928df87bba04e5bc995b771c9ee1463907a" Jan 22 16:40:43 crc kubenswrapper[4758]: I0122 16:40:43.504684 4758 scope.go:117] "RemoveContainer" containerID="5280a70e36ea601ca10423751b3ae6b4478b1c7552ba2a0beb14a05778f13a39" Jan 22 16:40:43 crc kubenswrapper[4758]: 
I0122 16:40:43.529334 4758 scope.go:117] "RemoveContainer" containerID="4bd4d7a6ce0f5eabb7a54beaa4c2580649af38778d45594fed69a145ecfcece7" Jan 22 16:40:43 crc kubenswrapper[4758]: I0122 16:40:43.551254 4758 scope.go:117] "RemoveContainer" containerID="b05fae3fddde3f7f1e9fa6cefbb6b68ceb3550b54594bedb88e87d7e3f0fa3b3" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.265614 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qg57g"] Jan 22 16:41:15 crc kubenswrapper[4758]: E0122 16:41:15.266472 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c983b09-f715-422e-960d-36dcc610c30b" containerName="registry" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.266491 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c983b09-f715-422e-960d-36dcc610c30b" containerName="registry" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.266671 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c983b09-f715-422e-960d-36dcc610c30b" containerName="registry" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.267215 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.270508 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qg57g"] Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.272788 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.273574 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.273869 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-x4h8f" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.285276 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-bpw4j"] Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.286496 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bpw4j" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.289209 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qcl9m" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.303790 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-hcn6c"] Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.304710 4758 util.go:30] "No sandbox for pod can be found. 
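Stepping back to the machine-config-daemon entries a little earlier: the liveness probe failed three consecutive times with "connection refused" against http://127.0.0.1:8798/health (consistent with the default failureThreshold of 3), after which the kubelet logged "failed liveness probe, will be restarted" and killed the container with its 600s grace period. A standalone Go sketch of the same kind of HTTP check the prober performs, treating a 2xx/3xx response as success and a transport error or any other status as failure; the URL is taken from the log, the 1s timeout is an assumed value.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe target copied from the log entries above.
        const target = "http://127.0.0.1:8798/health"
        client := &http.Client{Timeout: 1 * time.Second}

        resp, err := client.Get(target)
        if err != nil {
            // This is the branch the probes above kept hitting:
            // "dial tcp 127.0.0.1:8798: connect: connection refused".
            fmt.Println("probe failure:", err)
            return
        }
        defer resp.Body.Close()

        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            fmt.Println("probe success:", resp.Status)
        } else {
            fmt.Println("probe failure:", resp.Status)
        }
    }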
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.306994 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9xxdc" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.314027 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bpw4j"] Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.333901 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-hcn6c"] Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.403688 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlhzj\" (UniqueName: \"kubernetes.io/projected/9844066a-3c0e-4de2-b9d5-f6523e724066-kube-api-access-tlhzj\") pod \"cert-manager-webhook-687f57d79b-hcn6c\" (UID: \"9844066a-3c0e-4de2-b9d5-f6523e724066\") " pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.403762 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvld\" (UniqueName: \"kubernetes.io/projected/86017532-da20-4917-8f8b-34190218edc2-kube-api-access-xgvld\") pod \"cert-manager-cainjector-cf98fcc89-qg57g\" (UID: \"86017532-da20-4917-8f8b-34190218edc2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.403862 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2gsv\" (UniqueName: \"kubernetes.io/projected/36cf0be1-e796-4c9e-b232-2a0c0ceaaa79-kube-api-access-h2gsv\") pod \"cert-manager-858654f9db-bpw4j\" (UID: \"36cf0be1-e796-4c9e-b232-2a0c0ceaaa79\") " pod="cert-manager/cert-manager-858654f9db-bpw4j" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.505511 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlhzj\" (UniqueName: \"kubernetes.io/projected/9844066a-3c0e-4de2-b9d5-f6523e724066-kube-api-access-tlhzj\") pod \"cert-manager-webhook-687f57d79b-hcn6c\" (UID: \"9844066a-3c0e-4de2-b9d5-f6523e724066\") " pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.505554 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgvld\" (UniqueName: \"kubernetes.io/projected/86017532-da20-4917-8f8b-34190218edc2-kube-api-access-xgvld\") pod \"cert-manager-cainjector-cf98fcc89-qg57g\" (UID: \"86017532-da20-4917-8f8b-34190218edc2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.505596 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2gsv\" (UniqueName: \"kubernetes.io/projected/36cf0be1-e796-4c9e-b232-2a0c0ceaaa79-kube-api-access-h2gsv\") pod \"cert-manager-858654f9db-bpw4j\" (UID: \"36cf0be1-e796-4c9e-b232-2a0c0ceaaa79\") " pod="cert-manager/cert-manager-858654f9db-bpw4j" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.524101 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgvld\" (UniqueName: \"kubernetes.io/projected/86017532-da20-4917-8f8b-34190218edc2-kube-api-access-xgvld\") pod \"cert-manager-cainjector-cf98fcc89-qg57g\" (UID: \"86017532-da20-4917-8f8b-34190218edc2\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.524109 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlhzj\" (UniqueName: \"kubernetes.io/projected/9844066a-3c0e-4de2-b9d5-f6523e724066-kube-api-access-tlhzj\") pod \"cert-manager-webhook-687f57d79b-hcn6c\" (UID: \"9844066a-3c0e-4de2-b9d5-f6523e724066\") " pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.524426 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2gsv\" (UniqueName: \"kubernetes.io/projected/36cf0be1-e796-4c9e-b232-2a0c0ceaaa79-kube-api-access-h2gsv\") pod \"cert-manager-858654f9db-bpw4j\" (UID: \"36cf0be1-e796-4c9e-b232-2a0c0ceaaa79\") " pod="cert-manager/cert-manager-858654f9db-bpw4j" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.592538 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.607407 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bpw4j" Jan 22 16:41:15 crc kubenswrapper[4758]: I0122 16:41:15.630947 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" Jan 22 16:41:16 crc kubenswrapper[4758]: I0122 16:41:16.049612 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:41:16 crc kubenswrapper[4758]: I0122 16:41:16.060952 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qg57g"] Jan 22 16:41:16 crc kubenswrapper[4758]: W0122 16:41:16.067293 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36cf0be1_e796_4c9e_b232_2a0c0ceaaa79.slice/crio-3e5b240ba41ad624b74f1050b232ebbfe5a667d727266701ba39b1353cd51137 WatchSource:0}: Error finding container 3e5b240ba41ad624b74f1050b232ebbfe5a667d727266701ba39b1353cd51137: Status 404 returned error can't find the container with id 3e5b240ba41ad624b74f1050b232ebbfe5a667d727266701ba39b1353cd51137 Jan 22 16:41:16 crc kubenswrapper[4758]: I0122 16:41:16.067614 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bpw4j"] Jan 22 16:41:16 crc kubenswrapper[4758]: I0122 16:41:16.074566 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-hcn6c"] Jan 22 16:41:16 crc kubenswrapper[4758]: I0122 16:41:16.521945 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bpw4j" event={"ID":"36cf0be1-e796-4c9e-b232-2a0c0ceaaa79","Type":"ContainerStarted","Data":"3e5b240ba41ad624b74f1050b232ebbfe5a667d727266701ba39b1353cd51137"} Jan 22 16:41:16 crc kubenswrapper[4758]: I0122 16:41:16.522894 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" event={"ID":"9844066a-3c0e-4de2-b9d5-f6523e724066","Type":"ContainerStarted","Data":"d64ec73dbe974410da991f901a134099b40d521544b71ecada58a491acaa01df"} Jan 22 16:41:16 crc kubenswrapper[4758]: I0122 16:41:16.524053 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" 
event={"ID":"86017532-da20-4917-8f8b-34190218edc2","Type":"ContainerStarted","Data":"155ec6eacc912119af8524530ff5fbac888a1213cdf38afbd92e7054783d82f7"} Jan 22 16:41:22 crc kubenswrapper[4758]: I0122 16:41:22.565054 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" event={"ID":"86017532-da20-4917-8f8b-34190218edc2","Type":"ContainerStarted","Data":"f4e1ecc33b122dfeea31b64b121de90bd388c7aeb97dc5736a98282952aea0bb"} Jan 22 16:41:22 crc kubenswrapper[4758]: I0122 16:41:22.566956 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bpw4j" event={"ID":"36cf0be1-e796-4c9e-b232-2a0c0ceaaa79","Type":"ContainerStarted","Data":"c37297ebe88579c5a107fed428fe55697fdd51c3ee150191378695cbde38f831"} Jan 22 16:41:22 crc kubenswrapper[4758]: I0122 16:41:22.568357 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" event={"ID":"9844066a-3c0e-4de2-b9d5-f6523e724066","Type":"ContainerStarted","Data":"03934bafee8934e78a0894e0a3089a2f3b4f9921bd15e468cc2fc104fab67dae"} Jan 22 16:41:22 crc kubenswrapper[4758]: I0122 16:41:22.568502 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" Jan 22 16:41:22 crc kubenswrapper[4758]: I0122 16:41:22.586355 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" podStartSLOduration=1.5176280979999999 podStartE2EDuration="7.586336432s" podCreationTimestamp="2026-01-22 16:41:15 +0000 UTC" firstStartedPulling="2026-01-22 16:41:16.049372144 +0000 UTC m=+697.532711429" lastFinishedPulling="2026-01-22 16:41:22.118080478 +0000 UTC m=+703.601419763" observedRunningTime="2026-01-22 16:41:22.582970359 +0000 UTC m=+704.066309644" watchObservedRunningTime="2026-01-22 16:41:22.586336432 +0000 UTC m=+704.069675727" Jan 22 16:41:22 crc kubenswrapper[4758]: I0122 16:41:22.626197 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" podStartSLOduration=1.510647464 podStartE2EDuration="7.626179801s" podCreationTimestamp="2026-01-22 16:41:15 +0000 UTC" firstStartedPulling="2026-01-22 16:41:16.079638755 +0000 UTC m=+697.562978030" lastFinishedPulling="2026-01-22 16:41:22.195171072 +0000 UTC m=+703.678510367" observedRunningTime="2026-01-22 16:41:22.624969077 +0000 UTC m=+704.108308372" watchObservedRunningTime="2026-01-22 16:41:22.626179801 +0000 UTC m=+704.109519096" Jan 22 16:41:22 crc kubenswrapper[4758]: I0122 16:41:22.629666 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-bpw4j" podStartSLOduration=1.583984494 podStartE2EDuration="7.629647067s" podCreationTimestamp="2026-01-22 16:41:15 +0000 UTC" firstStartedPulling="2026-01-22 16:41:16.072423395 +0000 UTC m=+697.555762680" lastFinishedPulling="2026-01-22 16:41:22.118085968 +0000 UTC m=+703.601425253" observedRunningTime="2026-01-22 16:41:22.610367142 +0000 UTC m=+704.093706427" watchObservedRunningTime="2026-01-22 16:41:22.629647067 +0000 UTC m=+704.112986352" Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.182981 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jdpck"] Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.183823 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovn-controller" containerID="cri-o://915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67" gracePeriod=30 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.183928 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="nbdb" containerID="cri-o://596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020" gracePeriod=30 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.183999 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kube-rbac-proxy-node" containerID="cri-o://385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d" gracePeriod=30 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.183988 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6" gracePeriod=30 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.183972 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="northd" containerID="cri-o://47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f" gracePeriod=30 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.184089 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovn-acl-logging" containerID="cri-o://c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409" gracePeriod=30 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.184297 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="sbdb" containerID="cri-o://9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b" gracePeriod=30 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.224298 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" containerID="cri-o://450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691" gracePeriod=30 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.594580 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/2.log" Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.595650 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/1.log" Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.595841 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7dvfg" event={"ID":"97853b38-352d-42df-ad31-639c0e58093a","Type":"ContainerDied","Data":"733ea95ed7d8d4ff71e143ac3734ecdaaaec088e3579e9563ae043bb871c0a3d"} Jan 22 16:41:25 crc 
kubenswrapper[4758]: I0122 16:41:25.595911 4758 scope.go:117] "RemoveContainer" containerID="56af628fe62b476141809cfaea6a06fdd7dfa34ed41fb97425db4cdaa3ec7b4e" Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.595738 4758 generic.go:334] "Generic (PLEG): container finished" podID="97853b38-352d-42df-ad31-639c0e58093a" containerID="733ea95ed7d8d4ff71e143ac3734ecdaaaec088e3579e9563ae043bb871c0a3d" exitCode=2 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.596532 4758 scope.go:117] "RemoveContainer" containerID="733ea95ed7d8d4ff71e143ac3734ecdaaaec088e3579e9563ae043bb871c0a3d" Jan 22 16:41:25 crc kubenswrapper[4758]: E0122 16:41:25.596871 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7dvfg_openshift-multus(97853b38-352d-42df-ad31-639c0e58093a)\"" pod="openshift-multus/multus-7dvfg" podUID="97853b38-352d-42df-ad31-639c0e58093a" Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.600140 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovnkube-controller/3.log" Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.603350 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovn-acl-logging/0.log" Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604289 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovn-controller/0.log" Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604868 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691" exitCode=0 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604910 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6" exitCode=0 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604930 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d" exitCode=0 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604926 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691"} Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604948 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409" exitCode=143 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604966 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67" exitCode=143 Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604970 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" 
event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6"} Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604986 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d"} Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.604998 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409"} Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.605011 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67"} Jan 22 16:41:25 crc kubenswrapper[4758]: I0122 16:41:25.892041 4758 scope.go:117] "RemoveContainer" containerID="7a265cc950ba85a41da92efbf8a471efa10bdc6ef7aa7837fc86c3e4e023a263" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.611267 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovn-acl-logging/0.log" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.612233 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovn-controller/0.log" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.612712 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.613680 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovn-acl-logging/0.log" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.614493 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jdpck_9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/ovn-controller/0.log" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.614909 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b" exitCode=0 Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.614944 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020" exitCode=0 Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.614959 4758 generic.go:334] "Generic (PLEG): container finished" podID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerID="47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f" exitCode=0 Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.614960 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b"} Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.615008 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020"} Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.615034 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f"} Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.615049 4758 scope.go:117] "RemoveContainer" containerID="450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.615055 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" event={"ID":"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa","Type":"ContainerDied","Data":"6be781d7852bbffdd00c288a7b2594b4d11ab247f25a95ae78082f08c77990e7"} Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.617181 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/2.log" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.635695 4758 scope.go:117] "RemoveContainer" containerID="9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.670239 4758 scope.go:117] "RemoveContainer" containerID="596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.674636 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qm8wq"] Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675345 
4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675372 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675386 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovn-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675395 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovn-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675409 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675418 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675430 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675438 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675449 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kube-rbac-proxy-node" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675456 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kube-rbac-proxy-node" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675470 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675478 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675487 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675494 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675506 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="nbdb" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675513 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="nbdb" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675524 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kubecfg-setup" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675533 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kubecfg-setup" Jan 22 16:41:26 crc 
kubenswrapper[4758]: E0122 16:41:26.675543 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="sbdb" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675551 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="sbdb" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675560 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675568 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675577 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="northd" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675585 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="northd" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.675599 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovn-acl-logging" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675607 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovn-acl-logging" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675718 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675731 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675761 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovn-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675773 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="sbdb" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675785 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovn-acl-logging" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675796 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675807 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675819 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675829 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="ovnkube-controller" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675841 4758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="northd" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675853 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="nbdb" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.675907 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" containerName="kube-rbac-proxy-node" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.679701 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.705611 4758 scope.go:117] "RemoveContainer" containerID="47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.725598 4758 scope.go:117] "RemoveContainer" containerID="f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.740997 4758 scope.go:117] "RemoveContainer" containerID="385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742587 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovn-node-metrics-cert\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742644 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-script-lib\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742679 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-netd\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742706 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-kubelet\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742732 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-openvswitch\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742795 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-netns\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742823 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-bin\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742865 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-config\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742889 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-var-lib-openvswitch\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742880 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742932 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-slash\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742923 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742971 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-env-overrides\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742934 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743010 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-systemd\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742964 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742978 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743037 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-log-socket\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743001 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-slash" (OuterVolumeSpecName: "host-slash") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.742891 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743080 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-log-socket" (OuterVolumeSpecName: "log-socket") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743207 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743168 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-ovn\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743282 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-node-log\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743316 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743353 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-etc-openvswitch\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743391 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-node-log" (OuterVolumeSpecName: "node-log") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743421 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743450 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743435 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-ovn-kubernetes\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743470 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743512 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-systemd-units\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743564 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743579 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96qwj\" (UniqueName: \"kubernetes.io/projected/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-kube-api-access-96qwj\") pod \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\" (UID: \"9b60a09e-8bfa-4d2e-998d-e1db5dec0faa\") " Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743590 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743622 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743788 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-ovn-node-metrics-cert\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743826 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-kubelet\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743828 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743857 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-run-systemd\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.743922 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-cni-bin\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.744186 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-slash\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.744299 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-run-openvswitch\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.744479 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4m9\" (UniqueName: \"kubernetes.io/projected/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-kube-api-access-kg4m9\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.744559 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-env-overrides\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.744584 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-log-socket\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.744734 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-etc-openvswitch\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745151 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-ovnkube-config\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745193 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-var-lib-openvswitch\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745258 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-run-netns\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745302 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-systemd-units\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745378 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-cni-netd\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745413 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-ovnkube-script-lib\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745442 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-node-log\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745485 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-run-ovn\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745532 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745590 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-run-ovn-kubernetes\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745677 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745692 4758 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745704 4758 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745714 4758 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745725 4758 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745780 4758 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745793 4758 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745804 4758 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745817 4758 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745842 4758 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745853 4758 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745865 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745876 4758 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745919 4758 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745934 4758 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745949 4758 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.745964 4758 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.754400 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.754410 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-kube-api-access-96qwj" (OuterVolumeSpecName: "kube-api-access-96qwj") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "kube-api-access-96qwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.757189 4758 scope.go:117] "RemoveContainer" containerID="c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.770090 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" (UID: "9b60a09e-8bfa-4d2e-998d-e1db5dec0faa"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.796089 4758 scope.go:117] "RemoveContainer" containerID="915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.809600 4758 scope.go:117] "RemoveContainer" containerID="2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.825822 4758 scope.go:117] "RemoveContainer" containerID="450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.826245 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691\": container with ID starting with 450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691 not found: ID does not exist" containerID="450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.826295 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691"} err="failed to get container status \"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691\": rpc error: code = NotFound desc = could not find container \"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691\": container with ID starting with 450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.826326 4758 scope.go:117] "RemoveContainer" containerID="9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.826648 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\": container with ID starting with 9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b not found: ID does not exist" containerID="9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.826678 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b"} err="failed to get container status \"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\": rpc error: code = NotFound desc = could not find container \"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\": container with ID starting with 9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 
16:41:26.826701 4758 scope.go:117] "RemoveContainer" containerID="596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.827152 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\": container with ID starting with 596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020 not found: ID does not exist" containerID="596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.827180 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020"} err="failed to get container status \"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\": rpc error: code = NotFound desc = could not find container \"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\": container with ID starting with 596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.827205 4758 scope.go:117] "RemoveContainer" containerID="47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.827514 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\": container with ID starting with 47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f not found: ID does not exist" containerID="47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.827567 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f"} err="failed to get container status \"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\": rpc error: code = NotFound desc = could not find container \"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\": container with ID starting with 47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.827596 4758 scope.go:117] "RemoveContainer" containerID="f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.828025 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\": container with ID starting with f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6 not found: ID does not exist" containerID="f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.828048 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6"} err="failed to get container status \"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\": rpc error: code = NotFound desc = could not find container \"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\": container with ID 
starting with f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.828063 4758 scope.go:117] "RemoveContainer" containerID="385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.828299 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\": container with ID starting with 385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d not found: ID does not exist" containerID="385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.828328 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d"} err="failed to get container status \"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\": rpc error: code = NotFound desc = could not find container \"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\": container with ID starting with 385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.828345 4758 scope.go:117] "RemoveContainer" containerID="c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.828661 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\": container with ID starting with c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409 not found: ID does not exist" containerID="c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.828692 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409"} err="failed to get container status \"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\": rpc error: code = NotFound desc = could not find container \"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\": container with ID starting with c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.828711 4758 scope.go:117] "RemoveContainer" containerID="915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.828993 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\": container with ID starting with 915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67 not found: ID does not exist" containerID="915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.829020 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67"} err="failed to get container status 
\"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\": rpc error: code = NotFound desc = could not find container \"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\": container with ID starting with 915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.829036 4758 scope.go:117] "RemoveContainer" containerID="2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9" Jan 22 16:41:26 crc kubenswrapper[4758]: E0122 16:41:26.829341 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\": container with ID starting with 2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9 not found: ID does not exist" containerID="2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.829370 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9"} err="failed to get container status \"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\": rpc error: code = NotFound desc = could not find container \"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\": container with ID starting with 2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.829388 4758 scope.go:117] "RemoveContainer" containerID="450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.830000 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691"} err="failed to get container status \"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691\": rpc error: code = NotFound desc = could not find container \"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691\": container with ID starting with 450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.830074 4758 scope.go:117] "RemoveContainer" containerID="9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.830453 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b"} err="failed to get container status \"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\": rpc error: code = NotFound desc = could not find container \"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\": container with ID starting with 9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.830498 4758 scope.go:117] "RemoveContainer" containerID="596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.830834 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020"} err="failed to get container status 
\"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\": rpc error: code = NotFound desc = could not find container \"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\": container with ID starting with 596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.830861 4758 scope.go:117] "RemoveContainer" containerID="47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.831168 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f"} err="failed to get container status \"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\": rpc error: code = NotFound desc = could not find container \"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\": container with ID starting with 47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.831226 4758 scope.go:117] "RemoveContainer" containerID="f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.831538 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6"} err="failed to get container status \"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\": rpc error: code = NotFound desc = could not find container \"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\": container with ID starting with f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.831557 4758 scope.go:117] "RemoveContainer" containerID="385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.831894 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d"} err="failed to get container status \"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\": rpc error: code = NotFound desc = could not find container \"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\": container with ID starting with 385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.831931 4758 scope.go:117] "RemoveContainer" containerID="c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.832207 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409"} err="failed to get container status \"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\": rpc error: code = NotFound desc = could not find container \"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\": container with ID starting with c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.832225 4758 scope.go:117] "RemoveContainer" 
containerID="915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.832513 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67"} err="failed to get container status \"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\": rpc error: code = NotFound desc = could not find container \"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\": container with ID starting with 915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.832531 4758 scope.go:117] "RemoveContainer" containerID="2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.832861 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9"} err="failed to get container status \"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\": rpc error: code = NotFound desc = could not find container \"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\": container with ID starting with 2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.832884 4758 scope.go:117] "RemoveContainer" containerID="450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.833125 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691"} err="failed to get container status \"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691\": rpc error: code = NotFound desc = could not find container \"450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691\": container with ID starting with 450f07057ff0cbdff80b0a0746974a16bb12814a6720db90adaf18b0968da691 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.833142 4758 scope.go:117] "RemoveContainer" containerID="9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.833545 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b"} err="failed to get container status \"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\": rpc error: code = NotFound desc = could not find container \"9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b\": container with ID starting with 9cfdd5744f9e8afe2a851b86ac85473f44fb49066784a282306ca8c1d621974b not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.833570 4758 scope.go:117] "RemoveContainer" containerID="596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.833935 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020"} err="failed to get container status \"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\": rpc error: code = NotFound desc = could not find 
container \"596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020\": container with ID starting with 596bd59377fe79f228ddda88e07b73a2f24a57ce836d0f0b2ca02d6008363020 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.833964 4758 scope.go:117] "RemoveContainer" containerID="47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.834283 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f"} err="failed to get container status \"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\": rpc error: code = NotFound desc = could not find container \"47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f\": container with ID starting with 47ade0d50980af81530f1be5dbb599cf39cd13941d216485b18422f8474a1d8f not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.834309 4758 scope.go:117] "RemoveContainer" containerID="f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.834574 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6"} err="failed to get container status \"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\": rpc error: code = NotFound desc = could not find container \"f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6\": container with ID starting with f98a04a30984aea45235e40edb9801d2939b35a08519d1d63df0d0c6c47131a6 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.834600 4758 scope.go:117] "RemoveContainer" containerID="385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.834925 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d"} err="failed to get container status \"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\": rpc error: code = NotFound desc = could not find container \"385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d\": container with ID starting with 385c8e25a62d5dad6aeac43a064397418c85c1b8720414cd44e3e925fa85a04d not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.834944 4758 scope.go:117] "RemoveContainer" containerID="c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.835177 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409"} err="failed to get container status \"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\": rpc error: code = NotFound desc = could not find container \"c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409\": container with ID starting with c2bb807fa30678efaca258ed72a274a7f4e065ce20066caf601177dbc8466409 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.835231 4758 scope.go:117] "RemoveContainer" containerID="915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.835560 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67"} err="failed to get container status \"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\": rpc error: code = NotFound desc = could not find container \"915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67\": container with ID starting with 915d9141459dc9d0a72681717513aaef7a876003397a1ed89a62b755bb45dc67 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.835585 4758 scope.go:117] "RemoveContainer" containerID="2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.835860 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9"} err="failed to get container status \"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\": rpc error: code = NotFound desc = could not find container \"2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9\": container with ID starting with 2cdf70a8ef0a6df9742f1e06a0446c55c92b2d13b4175e4f3820d1ecb7d428f9 not found: ID does not exist" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-env-overrides\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847588 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-log-socket\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847613 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-etc-openvswitch\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-ovnkube-config\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847648 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-var-lib-openvswitch\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847664 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-run-netns\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847666 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-log-socket\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847688 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-systemd-units\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847729 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-systemd-units\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847670 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-etc-openvswitch\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847798 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-var-lib-openvswitch\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847827 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-cni-netd\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847823 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-run-netns\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847860 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-cni-netd\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-ovnkube-script-lib\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.847935 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-node-log\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848037 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-run-ovn\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848139 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-run-ovn-kubernetes\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848188 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-kubelet\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848212 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-ovn-node-metrics-cert\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848234 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-run-systemd\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848254 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-cni-bin\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-slash\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848334 4758 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-run-openvswitch\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848378 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4m9\" (UniqueName: \"kubernetes.io/projected/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-kube-api-access-kg4m9\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848440 4758 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848458 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96qwj\" (UniqueName: \"kubernetes.io/projected/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-kube-api-access-96qwj\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848459 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-ovnkube-config\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848471 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848508 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-kubelet\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848543 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-node-log\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848569 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-run-ovn\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848598 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848632 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-cni-bin\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848661 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-run-systemd\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848665 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-run-openvswitch\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848679 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-slash\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848686 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-host-run-ovn-kubernetes\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.848692 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-ovnkube-script-lib\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.849231 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-env-overrides\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.852885 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-ovn-node-metrics-cert\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.864134 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4m9\" (UniqueName: \"kubernetes.io/projected/3b6994c8-7df9-43d3-bb9f-d7984d5c661f-kube-api-access-kg4m9\") pod \"ovnkube-node-qm8wq\" (UID: \"3b6994c8-7df9-43d3-bb9f-d7984d5c661f\") " pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:26 crc kubenswrapper[4758]: I0122 16:41:26.997941 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:27 crc kubenswrapper[4758]: W0122 16:41:27.027005 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6994c8_7df9_43d3_bb9f_d7984d5c661f.slice/crio-c0ff75ec93cf254753caaef82008c89cd3d8e397ff937c7bb01b5a004ca0da4e WatchSource:0}: Error finding container c0ff75ec93cf254753caaef82008c89cd3d8e397ff937c7bb01b5a004ca0da4e: Status 404 returned error can't find the container with id c0ff75ec93cf254753caaef82008c89cd3d8e397ff937c7bb01b5a004ca0da4e Jan 22 16:41:27 crc kubenswrapper[4758]: I0122 16:41:27.627830 4758 generic.go:334] "Generic (PLEG): container finished" podID="3b6994c8-7df9-43d3-bb9f-d7984d5c661f" containerID="246b66e1b623f74192713273a7e71deb3f938e0557dd8a6dc2f0dbf7619e1900" exitCode=0 Jan 22 16:41:27 crc kubenswrapper[4758]: I0122 16:41:27.628115 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerDied","Data":"246b66e1b623f74192713273a7e71deb3f938e0557dd8a6dc2f0dbf7619e1900"} Jan 22 16:41:27 crc kubenswrapper[4758]: I0122 16:41:27.628356 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerStarted","Data":"c0ff75ec93cf254753caaef82008c89cd3d8e397ff937c7bb01b5a004ca0da4e"} Jan 22 16:41:27 crc kubenswrapper[4758]: I0122 16:41:27.630593 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jdpck" Jan 22 16:41:27 crc kubenswrapper[4758]: I0122 16:41:27.698417 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jdpck"] Jan 22 16:41:27 crc kubenswrapper[4758]: I0122 16:41:27.710734 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jdpck"] Jan 22 16:41:28 crc kubenswrapper[4758]: I0122 16:41:28.641422 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerStarted","Data":"e33a9dbbc54b7a1d64ce47309b655fc6d2defe608b4b64a12b8a05dfda88ad62"} Jan 22 16:41:28 crc kubenswrapper[4758]: I0122 16:41:28.641862 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerStarted","Data":"23c930b50d014dcce80e5170d30db5f1ef2935fac49664acc22d4f8fbe397059"} Jan 22 16:41:28 crc kubenswrapper[4758]: I0122 16:41:28.641878 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerStarted","Data":"1d28963b8860215acd0c2c8f3114f13887ae5e288b9c8b9350634123fbfaffea"} Jan 22 16:41:28 crc kubenswrapper[4758]: I0122 16:41:28.641890 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerStarted","Data":"bfd464db0e7610b79634d5e6347e3cfeed4e4835efcb51c88985d0204125e444"} Jan 22 16:41:28 crc kubenswrapper[4758]: I0122 16:41:28.641918 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" 
event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerStarted","Data":"4767eefc7828e07d5069e8953b69150951ae52f42db70ed9b17bf579eb091121"} Jan 22 16:41:28 crc kubenswrapper[4758]: I0122 16:41:28.641929 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerStarted","Data":"5bc8354fdc35217b031b24e178da668cc4acc0d658b70c522fc1271cfc16e59c"} Jan 22 16:41:28 crc kubenswrapper[4758]: I0122 16:41:28.818549 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b60a09e-8bfa-4d2e-998d-e1db5dec0faa" path="/var/lib/kubelet/pods/9b60a09e-8bfa-4d2e-998d-e1db5dec0faa/volumes" Jan 22 16:41:30 crc kubenswrapper[4758]: I0122 16:41:30.634854 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-hcn6c" Jan 22 16:41:30 crc kubenswrapper[4758]: I0122 16:41:30.657886 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerStarted","Data":"56dabcd1940ab9c3e89b6919ec1daf1ec74af62d787a883f54f559771c96c16f"} Jan 22 16:41:33 crc kubenswrapper[4758]: I0122 16:41:33.678410 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" event={"ID":"3b6994c8-7df9-43d3-bb9f-d7984d5c661f","Type":"ContainerStarted","Data":"9e8a05d6ea6149dfd924e54244847b2b620cb500b17898b73ea2565e1f2544ee"} Jan 22 16:41:33 crc kubenswrapper[4758]: I0122 16:41:33.679032 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:33 crc kubenswrapper[4758]: I0122 16:41:33.679055 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:33 crc kubenswrapper[4758]: I0122 16:41:33.679068 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:33 crc kubenswrapper[4758]: I0122 16:41:33.718051 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" podStartSLOduration=7.7180324890000005 podStartE2EDuration="7.718032489s" podCreationTimestamp="2026-01-22 16:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:41:33.715628612 +0000 UTC m=+715.198967907" watchObservedRunningTime="2026-01-22 16:41:33.718032489 +0000 UTC m=+715.201371774" Jan 22 16:41:33 crc kubenswrapper[4758]: I0122 16:41:33.728254 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:33 crc kubenswrapper[4758]: I0122 16:41:33.737984 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:41:39 crc kubenswrapper[4758]: I0122 16:41:39.808129 4758 scope.go:117] "RemoveContainer" containerID="733ea95ed7d8d4ff71e143ac3734ecdaaaec088e3579e9563ae043bb871c0a3d" Jan 22 16:41:39 crc kubenswrapper[4758]: E0122 16:41:39.809192 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus 
pod=multus-7dvfg_openshift-multus(97853b38-352d-42df-ad31-639c0e58093a)\"" pod="openshift-multus/multus-7dvfg" podUID="97853b38-352d-42df-ad31-639c0e58093a" Jan 22 16:41:53 crc kubenswrapper[4758]: I0122 16:41:53.808288 4758 scope.go:117] "RemoveContainer" containerID="733ea95ed7d8d4ff71e143ac3734ecdaaaec088e3579e9563ae043bb871c0a3d" Jan 22 16:41:54 crc kubenswrapper[4758]: I0122 16:41:54.803077 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7dvfg_97853b38-352d-42df-ad31-639c0e58093a/kube-multus/2.log" Jan 22 16:41:54 crc kubenswrapper[4758]: I0122 16:41:54.803525 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7dvfg" event={"ID":"97853b38-352d-42df-ad31-639c0e58093a","Type":"ContainerStarted","Data":"5d31ee738b3deb1194c4b7e553964124bedea4c3b82669b9dd13ea54bebe4bb7"} Jan 22 16:41:57 crc kubenswrapper[4758]: I0122 16:41:57.033238 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qm8wq" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.387793 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz"] Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.389328 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.391718 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.396005 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz"] Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.535233 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tfg\" (UniqueName: \"kubernetes.io/projected/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-kube-api-access-68tfg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.535369 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.535451 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.636143 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.636199 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tfg\" (UniqueName: \"kubernetes.io/projected/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-kube-api-access-68tfg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.636248 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.636678 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.636691 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.656141 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tfg\" (UniqueName: \"kubernetes.io/projected/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-kube-api-access-68tfg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:08 crc kubenswrapper[4758]: I0122 16:42:08.703944 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:09 crc kubenswrapper[4758]: I0122 16:42:09.100216 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz"] Jan 22 16:42:09 crc kubenswrapper[4758]: I0122 16:42:09.874885 4758 generic.go:334] "Generic (PLEG): container finished" podID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerID="2366697709ddab285fcee55b10983a3939d052397008043556ade79afa1e0bc3" exitCode=0 Jan 22 16:42:09 crc kubenswrapper[4758]: I0122 16:42:09.874955 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" event={"ID":"8d48ec26-2fe3-4ade-82f3-db3d61bf969c","Type":"ContainerDied","Data":"2366697709ddab285fcee55b10983a3939d052397008043556ade79afa1e0bc3"} Jan 22 16:42:09 crc kubenswrapper[4758]: I0122 16:42:09.875258 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" event={"ID":"8d48ec26-2fe3-4ade-82f3-db3d61bf969c","Type":"ContainerStarted","Data":"a9f18747bdc9735727a5428eb91b00afe0e6d3286e895eee0b71b1bb218744c8"} Jan 22 16:42:22 crc kubenswrapper[4758]: I0122 16:42:22.945141 4758 generic.go:334] "Generic (PLEG): container finished" podID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerID="1335f02fc25f1b4b8d57625870daec15bac552f5a996375bbed6041c23aee8ef" exitCode=0 Jan 22 16:42:22 crc kubenswrapper[4758]: I0122 16:42:22.945936 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" event={"ID":"8d48ec26-2fe3-4ade-82f3-db3d61bf969c","Type":"ContainerDied","Data":"1335f02fc25f1b4b8d57625870daec15bac552f5a996375bbed6041c23aee8ef"} Jan 22 16:42:23 crc kubenswrapper[4758]: I0122 16:42:23.953951 4758 generic.go:334] "Generic (PLEG): container finished" podID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerID="6d02fcebe6e9ae46b10103982e90654b3077dea15a9734a7ce1b90ad2b4161cc" exitCode=0 Jan 22 16:42:23 crc kubenswrapper[4758]: I0122 16:42:23.954282 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" event={"ID":"8d48ec26-2fe3-4ade-82f3-db3d61bf969c","Type":"ContainerDied","Data":"6d02fcebe6e9ae46b10103982e90654b3077dea15a9734a7ce1b90ad2b4161cc"} Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.408450 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.518877 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68tfg\" (UniqueName: \"kubernetes.io/projected/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-kube-api-access-68tfg\") pod \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.519274 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-util\") pod \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.519356 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-bundle\") pod \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\" (UID: \"8d48ec26-2fe3-4ade-82f3-db3d61bf969c\") " Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.521190 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-bundle" (OuterVolumeSpecName: "bundle") pod "8d48ec26-2fe3-4ade-82f3-db3d61bf969c" (UID: "8d48ec26-2fe3-4ade-82f3-db3d61bf969c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.523988 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-kube-api-access-68tfg" (OuterVolumeSpecName: "kube-api-access-68tfg") pod "8d48ec26-2fe3-4ade-82f3-db3d61bf969c" (UID: "8d48ec26-2fe3-4ade-82f3-db3d61bf969c"). InnerVolumeSpecName "kube-api-access-68tfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.529819 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-util" (OuterVolumeSpecName: "util") pod "8d48ec26-2fe3-4ade-82f3-db3d61bf969c" (UID: "8d48ec26-2fe3-4ade-82f3-db3d61bf969c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.620439 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68tfg\" (UniqueName: \"kubernetes.io/projected/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-kube-api-access-68tfg\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.620863 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.621009 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d48ec26-2fe3-4ade-82f3-db3d61bf969c-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.970997 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" event={"ID":"8d48ec26-2fe3-4ade-82f3-db3d61bf969c","Type":"ContainerDied","Data":"a9f18747bdc9735727a5428eb91b00afe0e6d3286e895eee0b71b1bb218744c8"} Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.971051 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f18747bdc9735727a5428eb91b00afe0e6d3286e895eee0b71b1bb218744c8" Jan 22 16:42:25 crc kubenswrapper[4758]: I0122 16:42:25.971113 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz" Jan 22 16:42:33 crc kubenswrapper[4758]: I0122 16:42:33.122446 4758 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.560978 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6"] Jan 22 16:42:37 crc kubenswrapper[4758]: E0122 16:42:37.561735 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerName="util" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.561793 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerName="util" Jan 22 16:42:37 crc kubenswrapper[4758]: E0122 16:42:37.561810 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerName="extract" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.561817 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerName="extract" Jan 22 16:42:37 crc kubenswrapper[4758]: E0122 16:42:37.561837 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerName="pull" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.561842 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerName="pull" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.562072 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d48ec26-2fe3-4ade-82f3-db3d61bf969c" containerName="extract" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.562631 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.569284 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.569438 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.569532 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4jql8" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.579140 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6"] Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.647709 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr"] Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.648556 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.652435 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.652518 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-z2sxt" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.658442 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce26f110-8bb8-42b0-82cc-a001c2c1ebaf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr\" (UID: \"ce26f110-8bb8-42b0-82cc-a001c2c1ebaf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.658497 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce26f110-8bb8-42b0-82cc-a001c2c1ebaf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr\" (UID: \"ce26f110-8bb8-42b0-82cc-a001c2c1ebaf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.658566 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9bgb\" (UniqueName: \"kubernetes.io/projected/fdd4969c-d2b9-45fa-b5b2-da97462c0122-kube-api-access-p9bgb\") pod \"obo-prometheus-operator-68bc856cb9-54jp6\" (UID: \"fdd4969c-d2b9-45fa-b5b2-da97462c0122\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.658631 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75"] Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.659496 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.673916 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75"] Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.705930 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr"] Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.759298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce26f110-8bb8-42b0-82cc-a001c2c1ebaf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr\" (UID: \"ce26f110-8bb8-42b0-82cc-a001c2c1ebaf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.759587 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73c81be-8209-43d3-9756-49c2157dde87-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-4wr75\" (UID: \"e73c81be-8209-43d3-9756-49c2157dde87\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.759723 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9bgb\" (UniqueName: \"kubernetes.io/projected/fdd4969c-d2b9-45fa-b5b2-da97462c0122-kube-api-access-p9bgb\") pod \"obo-prometheus-operator-68bc856cb9-54jp6\" (UID: \"fdd4969c-d2b9-45fa-b5b2-da97462c0122\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.759883 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73c81be-8209-43d3-9756-49c2157dde87-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-4wr75\" (UID: \"e73c81be-8209-43d3-9756-49c2157dde87\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.759991 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce26f110-8bb8-42b0-82cc-a001c2c1ebaf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr\" (UID: \"ce26f110-8bb8-42b0-82cc-a001c2c1ebaf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.765517 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce26f110-8bb8-42b0-82cc-a001c2c1ebaf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr\" (UID: \"ce26f110-8bb8-42b0-82cc-a001c2c1ebaf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.770491 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce26f110-8bb8-42b0-82cc-a001c2c1ebaf-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr\" (UID: \"ce26f110-8bb8-42b0-82cc-a001c2c1ebaf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.778145 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9bgb\" (UniqueName: \"kubernetes.io/projected/fdd4969c-d2b9-45fa-b5b2-da97462c0122-kube-api-access-p9bgb\") pod \"obo-prometheus-operator-68bc856cb9-54jp6\" (UID: \"fdd4969c-d2b9-45fa-b5b2-da97462c0122\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.793241 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-thgv5"] Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.793875 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.795234 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rdwz2" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.795925 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.816479 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-thgv5"] Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.860950 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73c81be-8209-43d3-9756-49c2157dde87-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-4wr75\" (UID: \"e73c81be-8209-43d3-9756-49c2157dde87\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.861047 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73c81be-8209-43d3-9756-49c2157dde87-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-4wr75\" (UID: \"e73c81be-8209-43d3-9756-49c2157dde87\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.861070 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e12dec2b-da40-4cad-92b5-91ab59c0e7c2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-thgv5\" (UID: \"e12dec2b-da40-4cad-92b5-91ab59c0e7c2\") " pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.861093 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px4j2\" (UniqueName: \"kubernetes.io/projected/e12dec2b-da40-4cad-92b5-91ab59c0e7c2-kube-api-access-px4j2\") pod \"observability-operator-59bdc8b94-thgv5\" (UID: \"e12dec2b-da40-4cad-92b5-91ab59c0e7c2\") " pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.866254 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/e73c81be-8209-43d3-9756-49c2157dde87-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-4wr75\" (UID: \"e73c81be-8209-43d3-9756-49c2157dde87\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.866406 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73c81be-8209-43d3-9756-49c2157dde87-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-647895bbd9-4wr75\" (UID: \"e73c81be-8209-43d3-9756-49c2157dde87\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.921597 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.965044 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.965323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e12dec2b-da40-4cad-92b5-91ab59c0e7c2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-thgv5\" (UID: \"e12dec2b-da40-4cad-92b5-91ab59c0e7c2\") " pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.965382 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4j2\" (UniqueName: \"kubernetes.io/projected/e12dec2b-da40-4cad-92b5-91ab59c0e7c2-kube-api-access-px4j2\") pod \"observability-operator-59bdc8b94-thgv5\" (UID: \"e12dec2b-da40-4cad-92b5-91ab59c0e7c2\") " pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.968455 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e12dec2b-da40-4cad-92b5-91ab59c0e7c2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-thgv5\" (UID: \"e12dec2b-da40-4cad-92b5-91ab59c0e7c2\") " pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:42:37 crc kubenswrapper[4758]: I0122 16:42:37.983115 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.001538 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px4j2\" (UniqueName: \"kubernetes.io/projected/e12dec2b-da40-4cad-92b5-91ab59c0e7c2-kube-api-access-px4j2\") pod \"observability-operator-59bdc8b94-thgv5\" (UID: \"e12dec2b-da40-4cad-92b5-91ab59c0e7c2\") " pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.008465 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-fgjds"] Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.012878 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.015677 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-c658k" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.018072 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-fgjds"] Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.129101 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.176308 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgr95\" (UniqueName: \"kubernetes.io/projected/1a0e3e73-5ee6-4155-b3b2-0ada1f94100e-kube-api-access-mgr95\") pod \"perses-operator-5bf474d74f-fgjds\" (UID: \"1a0e3e73-5ee6-4155-b3b2-0ada1f94100e\") " pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.176706 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a0e3e73-5ee6-4155-b3b2-0ada1f94100e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-fgjds\" (UID: \"1a0e3e73-5ee6-4155-b3b2-0ada1f94100e\") " pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.281619 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a0e3e73-5ee6-4155-b3b2-0ada1f94100e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-fgjds\" (UID: \"1a0e3e73-5ee6-4155-b3b2-0ada1f94100e\") " pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.281679 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgr95\" (UniqueName: \"kubernetes.io/projected/1a0e3e73-5ee6-4155-b3b2-0ada1f94100e-kube-api-access-mgr95\") pod \"perses-operator-5bf474d74f-fgjds\" (UID: \"1a0e3e73-5ee6-4155-b3b2-0ada1f94100e\") " pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.283678 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a0e3e73-5ee6-4155-b3b2-0ada1f94100e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-fgjds\" (UID: \"1a0e3e73-5ee6-4155-b3b2-0ada1f94100e\") " pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.297694 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6"] Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.320872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgr95\" (UniqueName: \"kubernetes.io/projected/1a0e3e73-5ee6-4155-b3b2-0ada1f94100e-kube-api-access-mgr95\") pod \"perses-operator-5bf474d74f-fgjds\" (UID: \"1a0e3e73-5ee6-4155-b3b2-0ada1f94100e\") " pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.337097 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.543312 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-thgv5"] Jan 22 16:42:38 crc kubenswrapper[4758]: W0122 16:42:38.549847 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode12dec2b_da40_4cad_92b5_91ab59c0e7c2.slice/crio-a118b0bdc9d1bfd424a5ec1bc1a1a10a5cf5027a2aa56d3a1420ffac6f0c49f9 WatchSource:0}: Error finding container a118b0bdc9d1bfd424a5ec1bc1a1a10a5cf5027a2aa56d3a1420ffac6f0c49f9: Status 404 returned error can't find the container with id a118b0bdc9d1bfd424a5ec1bc1a1a10a5cf5027a2aa56d3a1420ffac6f0c49f9 Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.601502 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr"] Jan 22 16:42:38 crc kubenswrapper[4758]: W0122 16:42:38.613876 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce26f110_8bb8_42b0_82cc_a001c2c1ebaf.slice/crio-9b403a6fc3a6b5ef8eb96024b5f52fbaf7b4137a9c147d64d1702d60868090c9 WatchSource:0}: Error finding container 9b403a6fc3a6b5ef8eb96024b5f52fbaf7b4137a9c147d64d1702d60868090c9: Status 404 returned error can't find the container with id 9b403a6fc3a6b5ef8eb96024b5f52fbaf7b4137a9c147d64d1702d60868090c9 Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.629413 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75"] Jan 22 16:42:38 crc kubenswrapper[4758]: W0122 16:42:38.644644 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode73c81be_8209_43d3_9756_49c2157dde87.slice/crio-9840f76aeb7ae57736afe93484f00b154ae99f568204e5d3c79a5d8a24eb7cbd WatchSource:0}: Error finding container 9840f76aeb7ae57736afe93484f00b154ae99f568204e5d3c79a5d8a24eb7cbd: Status 404 returned error can't find the container with id 9840f76aeb7ae57736afe93484f00b154ae99f568204e5d3c79a5d8a24eb7cbd Jan 22 16:42:38 crc kubenswrapper[4758]: I0122 16:42:38.729656 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-fgjds"] Jan 22 16:42:38 crc kubenswrapper[4758]: W0122 16:42:38.736806 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a0e3e73_5ee6_4155_b3b2_0ada1f94100e.slice/crio-6d06d69048afdeaa41c8501e57df13809a5defa8b78e74930b05917474f9b596 WatchSource:0}: Error finding container 6d06d69048afdeaa41c8501e57df13809a5defa8b78e74930b05917474f9b596: Status 404 returned error can't find the container with id 6d06d69048afdeaa41c8501e57df13809a5defa8b78e74930b05917474f9b596 Jan 22 16:42:39 crc kubenswrapper[4758]: I0122 16:42:39.073386 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" event={"ID":"fdd4969c-d2b9-45fa-b5b2-da97462c0122","Type":"ContainerStarted","Data":"1aece415e7568a2bb18df4ab73010bcf09162808ac382c044c43cf8e4dee463e"} Jan 22 16:42:39 crc kubenswrapper[4758]: I0122 16:42:39.074942 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-fgjds" 
event={"ID":"1a0e3e73-5ee6-4155-b3b2-0ada1f94100e","Type":"ContainerStarted","Data":"6d06d69048afdeaa41c8501e57df13809a5defa8b78e74930b05917474f9b596"} Jan 22 16:42:39 crc kubenswrapper[4758]: I0122 16:42:39.075950 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" event={"ID":"e73c81be-8209-43d3-9756-49c2157dde87","Type":"ContainerStarted","Data":"9840f76aeb7ae57736afe93484f00b154ae99f568204e5d3c79a5d8a24eb7cbd"} Jan 22 16:42:39 crc kubenswrapper[4758]: I0122 16:42:39.076943 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" event={"ID":"ce26f110-8bb8-42b0-82cc-a001c2c1ebaf","Type":"ContainerStarted","Data":"9b403a6fc3a6b5ef8eb96024b5f52fbaf7b4137a9c147d64d1702d60868090c9"} Jan 22 16:42:39 crc kubenswrapper[4758]: I0122 16:42:39.077779 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" event={"ID":"e12dec2b-da40-4cad-92b5-91ab59c0e7c2","Type":"ContainerStarted","Data":"a118b0bdc9d1bfd424a5ec1bc1a1a10a5cf5027a2aa56d3a1420ffac6f0c49f9"} Jan 22 16:42:43 crc kubenswrapper[4758]: I0122 16:42:43.838264 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:42:43 crc kubenswrapper[4758]: I0122 16:42:43.838768 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:42:52 crc kubenswrapper[4758]: E0122 16:42:52.315075 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Jan 22 16:42:52 crc kubenswrapper[4758]: E0122 16:42:52.315827 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true 
--disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9bgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-54jp6_openshift-operators(fdd4969c-d2b9-45fa-b5b2-da97462c0122): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:42:52 crc kubenswrapper[4758]: E0122 16:42:52.317054 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" podUID="fdd4969c-d2b9-45fa-b5b2-da97462c0122" Jan 22 16:42:53 crc kubenswrapper[4758]: E0122 16:42:53.081793 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" Jan 22 16:42:53 crc kubenswrapper[4758]: E0122 16:42:53.082385 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgr95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5bf474d74f-fgjds_openshift-operators(1a0e3e73-5ee6-4155-b3b2-0ada1f94100e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:42:53 crc kubenswrapper[4758]: E0122 16:42:53.083440 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5bf474d74f-fgjds" podUID="1a0e3e73-5ee6-4155-b3b2-0ada1f94100e" Jan 22 16:42:53 crc kubenswrapper[4758]: E0122 16:42:53.196839 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8\\\"\"" pod="openshift-operators/perses-operator-5bf474d74f-fgjds" podUID="1a0e3e73-5ee6-4155-b3b2-0ada1f94100e" Jan 22 16:42:53 crc kubenswrapper[4758]: E0122 16:42:53.196859 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" podUID="fdd4969c-d2b9-45fa-b5b2-da97462c0122" Jan 22 16:42:54 crc kubenswrapper[4758]: I0122 16:42:54.202808 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" 
event={"ID":"e73c81be-8209-43d3-9756-49c2157dde87","Type":"ContainerStarted","Data":"c51bbc037b03d5808bd164453810eec8d70635261d6eac0aff3f1ae4ede8650f"} Jan 22 16:42:54 crc kubenswrapper[4758]: I0122 16:42:54.204629 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" event={"ID":"ce26f110-8bb8-42b0-82cc-a001c2c1ebaf","Type":"ContainerStarted","Data":"b2cda76da9cbe2eb3b3f2cf9da9be841ced07a97cec6b528bc221d8067575154"} Jan 22 16:42:54 crc kubenswrapper[4758]: I0122 16:42:54.206156 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" event={"ID":"e12dec2b-da40-4cad-92b5-91ab59c0e7c2","Type":"ContainerStarted","Data":"1d39dd9e3e0165d270e4ad4254717593e8195796d4f1a0980434d4e65fb6f736"} Jan 22 16:42:54 crc kubenswrapper[4758]: I0122 16:42:54.206390 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:42:54 crc kubenswrapper[4758]: I0122 16:42:54.221399 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-4wr75" podStartSLOduration=2.784824488 podStartE2EDuration="17.221380573s" podCreationTimestamp="2026-01-22 16:42:37 +0000 UTC" firstStartedPulling="2026-01-22 16:42:38.646930583 +0000 UTC m=+780.130269868" lastFinishedPulling="2026-01-22 16:42:53.083486668 +0000 UTC m=+794.566825953" observedRunningTime="2026-01-22 16:42:54.219048676 +0000 UTC m=+795.702387961" watchObservedRunningTime="2026-01-22 16:42:54.221380573 +0000 UTC m=+795.704719858" Jan 22 16:42:54 crc kubenswrapper[4758]: I0122 16:42:54.246855 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr" podStartSLOduration=2.7509016969999998 podStartE2EDuration="17.246818985s" podCreationTimestamp="2026-01-22 16:42:37 +0000 UTC" firstStartedPulling="2026-01-22 16:42:38.620425276 +0000 UTC m=+780.103764561" lastFinishedPulling="2026-01-22 16:42:53.116342564 +0000 UTC m=+794.599681849" observedRunningTime="2026-01-22 16:42:54.246068104 +0000 UTC m=+795.729407399" watchObservedRunningTime="2026-01-22 16:42:54.246818985 +0000 UTC m=+795.730158280" Jan 22 16:42:54 crc kubenswrapper[4758]: I0122 16:42:54.273335 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" podStartSLOduration=2.711531416 podStartE2EDuration="17.273317848s" podCreationTimestamp="2026-01-22 16:42:37 +0000 UTC" firstStartedPulling="2026-01-22 16:42:38.554611823 +0000 UTC m=+780.037951108" lastFinishedPulling="2026-01-22 16:42:53.116398255 +0000 UTC m=+794.599737540" observedRunningTime="2026-01-22 16:42:54.272359491 +0000 UTC m=+795.755698776" watchObservedRunningTime="2026-01-22 16:42:54.273317848 +0000 UTC m=+795.756657133" Jan 22 16:42:54 crc kubenswrapper[4758]: I0122 16:42:54.407497 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" Jan 22 16:43:08 crc kubenswrapper[4758]: I0122 16:43:08.283653 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" event={"ID":"fdd4969c-d2b9-45fa-b5b2-da97462c0122","Type":"ContainerStarted","Data":"e2b677c913de1204a89120f661f9596179f50da162abb24ca7c534236248a4ed"} 
Jan 22 16:43:08 crc kubenswrapper[4758]: I0122 16:43:08.306379 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-54jp6" podStartSLOduration=1.8227110720000002 podStartE2EDuration="31.306358718s" podCreationTimestamp="2026-01-22 16:42:37 +0000 UTC" firstStartedPulling="2026-01-22 16:42:38.315983095 +0000 UTC m=+779.799322380" lastFinishedPulling="2026-01-22 16:43:07.799630741 +0000 UTC m=+809.282970026" observedRunningTime="2026-01-22 16:43:08.30227187 +0000 UTC m=+809.785611165" watchObservedRunningTime="2026-01-22 16:43:08.306358718 +0000 UTC m=+809.789698003" Jan 22 16:43:09 crc kubenswrapper[4758]: I0122 16:43:09.294608 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-fgjds" event={"ID":"1a0e3e73-5ee6-4155-b3b2-0ada1f94100e","Type":"ContainerStarted","Data":"b267df85a0a4e655b260942a989ebaff4be699dfb35b885bb4375381a4d35328"} Jan 22 16:43:09 crc kubenswrapper[4758]: I0122 16:43:09.295973 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:43:13 crc kubenswrapper[4758]: I0122 16:43:13.837256 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:43:13 crc kubenswrapper[4758]: I0122 16:43:13.837611 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:43:18 crc kubenswrapper[4758]: I0122 16:43:18.340763 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-fgjds" Jan 22 16:43:18 crc kubenswrapper[4758]: I0122 16:43:18.360612 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-fgjds" podStartSLOduration=11.309589659 podStartE2EDuration="41.360592055s" podCreationTimestamp="2026-01-22 16:42:37 +0000 UTC" firstStartedPulling="2026-01-22 16:42:38.739418656 +0000 UTC m=+780.222757941" lastFinishedPulling="2026-01-22 16:43:08.790421052 +0000 UTC m=+810.273760337" observedRunningTime="2026-01-22 16:43:09.314146787 +0000 UTC m=+810.797486082" watchObservedRunningTime="2026-01-22 16:43:18.360592055 +0000 UTC m=+819.843931340" Jan 22 16:43:35 crc kubenswrapper[4758]: I0122 16:43:35.937166 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj"] Jan 22 16:43:35 crc kubenswrapper[4758]: I0122 16:43:35.938806 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:35 crc kubenswrapper[4758]: I0122 16:43:35.948351 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 16:43:35 crc kubenswrapper[4758]: I0122 16:43:35.953408 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj"] Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.065546 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5hff\" (UniqueName: \"kubernetes.io/projected/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-kube-api-access-b5hff\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.065689 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.065720 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.167041 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.167342 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.167483 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5hff\" (UniqueName: \"kubernetes.io/projected/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-kube-api-access-b5hff\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.167732 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.167857 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.194010 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5hff\" (UniqueName: \"kubernetes.io/projected/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-kube-api-access-b5hff\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.256035 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:36 crc kubenswrapper[4758]: I0122 16:43:36.680901 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj"] Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.651698 4758 generic.go:334] "Generic (PLEG): container finished" podID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerID="f84428f8f079af18a2742c71995ee648123cb73c35e2081f0b1010f535e561fa" exitCode=0 Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.651787 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" event={"ID":"89caa1d0-37ab-4cb9-b204-30a78b86fd9f","Type":"ContainerDied","Data":"f84428f8f079af18a2742c71995ee648123cb73c35e2081f0b1010f535e561fa"} Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.651997 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" event={"ID":"89caa1d0-37ab-4cb9-b204-30a78b86fd9f","Type":"ContainerStarted","Data":"8bc7edaf58e9f1805aa4a2b8de65eb5a5a1614ee43340e74f1ffefef43124138"} Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.655143 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9d6rp"] Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.656250 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.683038 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9d6rp"] Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.789067 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-utilities\") pod \"redhat-operators-9d6rp\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.789165 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-catalog-content\") pod \"redhat-operators-9d6rp\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.789240 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk57f\" (UniqueName: \"kubernetes.io/projected/a85c69d6-3710-452b-9588-8749343b7d2a-kube-api-access-pk57f\") pod \"redhat-operators-9d6rp\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.890005 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-utilities\") pod \"redhat-operators-9d6rp\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.890400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-catalog-content\") pod \"redhat-operators-9d6rp\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.890503 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk57f\" (UniqueName: \"kubernetes.io/projected/a85c69d6-3710-452b-9588-8749343b7d2a-kube-api-access-pk57f\") pod \"redhat-operators-9d6rp\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.890576 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-utilities\") pod \"redhat-operators-9d6rp\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.890995 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-catalog-content\") pod \"redhat-operators-9d6rp\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.917975 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pk57f\" (UniqueName: \"kubernetes.io/projected/a85c69d6-3710-452b-9588-8749343b7d2a-kube-api-access-pk57f\") pod \"redhat-operators-9d6rp\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:37 crc kubenswrapper[4758]: I0122 16:43:37.977656 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:38 crc kubenswrapper[4758]: I0122 16:43:38.236076 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9d6rp"] Jan 22 16:43:38 crc kubenswrapper[4758]: I0122 16:43:38.659155 4758 generic.go:334] "Generic (PLEG): container finished" podID="a85c69d6-3710-452b-9588-8749343b7d2a" containerID="6d7a0a02923094ffd0e11fd5c139e6b05b1f91bdafd4e7ba121ff392f4ef264c" exitCode=0 Jan 22 16:43:38 crc kubenswrapper[4758]: I0122 16:43:38.659219 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d6rp" event={"ID":"a85c69d6-3710-452b-9588-8749343b7d2a","Type":"ContainerDied","Data":"6d7a0a02923094ffd0e11fd5c139e6b05b1f91bdafd4e7ba121ff392f4ef264c"} Jan 22 16:43:38 crc kubenswrapper[4758]: I0122 16:43:38.659477 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d6rp" event={"ID":"a85c69d6-3710-452b-9588-8749343b7d2a","Type":"ContainerStarted","Data":"ba21453e1fe3b3ee8fcf75cdf4066def582752ea461d928d015f01f4f1c361e0"} Jan 22 16:43:39 crc kubenswrapper[4758]: I0122 16:43:39.666013 4758 generic.go:334] "Generic (PLEG): container finished" podID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerID="2eedbb19b6c32de8ecf3a2ec5172ba347a694e0d3f3e53accb6d33674bd86e0a" exitCode=0 Jan 22 16:43:39 crc kubenswrapper[4758]: I0122 16:43:39.666094 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" event={"ID":"89caa1d0-37ab-4cb9-b204-30a78b86fd9f","Type":"ContainerDied","Data":"2eedbb19b6c32de8ecf3a2ec5172ba347a694e0d3f3e53accb6d33674bd86e0a"} Jan 22 16:43:40 crc kubenswrapper[4758]: I0122 16:43:40.769088 4758 generic.go:334] "Generic (PLEG): container finished" podID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerID="b039e90decf1083de133367457c5259008995cd77bec4876227f7fe1cc27af29" exitCode=0 Jan 22 16:43:40 crc kubenswrapper[4758]: I0122 16:43:40.769815 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" event={"ID":"89caa1d0-37ab-4cb9-b204-30a78b86fd9f","Type":"ContainerDied","Data":"b039e90decf1083de133367457c5259008995cd77bec4876227f7fe1cc27af29"} Jan 22 16:43:40 crc kubenswrapper[4758]: I0122 16:43:40.771212 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d6rp" event={"ID":"a85c69d6-3710-452b-9588-8749343b7d2a","Type":"ContainerStarted","Data":"701ed7be15db42c7f643dc10d035d41464427c22f85ca8a29d312c001e0ecb01"} Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.445177 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.575244 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-util\") pod \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.575334 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5hff\" (UniqueName: \"kubernetes.io/projected/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-kube-api-access-b5hff\") pod \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.575377 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-bundle\") pod \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\" (UID: \"89caa1d0-37ab-4cb9-b204-30a78b86fd9f\") " Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.576119 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-bundle" (OuterVolumeSpecName: "bundle") pod "89caa1d0-37ab-4cb9-b204-30a78b86fd9f" (UID: "89caa1d0-37ab-4cb9-b204-30a78b86fd9f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.576814 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.580837 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-kube-api-access-b5hff" (OuterVolumeSpecName: "kube-api-access-b5hff") pod "89caa1d0-37ab-4cb9-b204-30a78b86fd9f" (UID: "89caa1d0-37ab-4cb9-b204-30a78b86fd9f"). InnerVolumeSpecName "kube-api-access-b5hff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.604275 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-util" (OuterVolumeSpecName: "util") pod "89caa1d0-37ab-4cb9-b204-30a78b86fd9f" (UID: "89caa1d0-37ab-4cb9-b204-30a78b86fd9f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.677821 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.677879 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5hff\" (UniqueName: \"kubernetes.io/projected/89caa1d0-37ab-4cb9-b204-30a78b86fd9f-kube-api-access-b5hff\") on node \"crc\" DevicePath \"\"" Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.790941 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" event={"ID":"89caa1d0-37ab-4cb9-b204-30a78b86fd9f","Type":"ContainerDied","Data":"8bc7edaf58e9f1805aa4a2b8de65eb5a5a1614ee43340e74f1ffefef43124138"} Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.791007 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc7edaf58e9f1805aa4a2b8de65eb5a5a1614ee43340e74f1ffefef43124138" Jan 22 16:43:42 crc kubenswrapper[4758]: I0122 16:43:42.791096 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj" Jan 22 16:43:43 crc kubenswrapper[4758]: I0122 16:43:43.801642 4758 generic.go:334] "Generic (PLEG): container finished" podID="a85c69d6-3710-452b-9588-8749343b7d2a" containerID="701ed7be15db42c7f643dc10d035d41464427c22f85ca8a29d312c001e0ecb01" exitCode=0 Jan 22 16:43:43 crc kubenswrapper[4758]: I0122 16:43:43.801905 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d6rp" event={"ID":"a85c69d6-3710-452b-9588-8749343b7d2a","Type":"ContainerDied","Data":"701ed7be15db42c7f643dc10d035d41464427c22f85ca8a29d312c001e0ecb01"} Jan 22 16:43:43 crc kubenswrapper[4758]: I0122 16:43:43.837162 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:43:43 crc kubenswrapper[4758]: I0122 16:43:43.837246 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:43:43 crc kubenswrapper[4758]: I0122 16:43:43.837642 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:43:43 crc kubenswrapper[4758]: I0122 16:43:43.838427 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0b336b68370ee625e40b6f05f78d3e38cf1d61c80e48d4c0f21417f2aeb9ed4"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:43:43 crc kubenswrapper[4758]: I0122 16:43:43.838528 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://d0b336b68370ee625e40b6f05f78d3e38cf1d61c80e48d4c0f21417f2aeb9ed4" gracePeriod=600 Jan 22 16:43:44 crc kubenswrapper[4758]: I0122 16:43:44.815307 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="d0b336b68370ee625e40b6f05f78d3e38cf1d61c80e48d4c0f21417f2aeb9ed4" exitCode=0 Jan 22 16:43:44 crc kubenswrapper[4758]: I0122 16:43:44.817350 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d6rp" event={"ID":"a85c69d6-3710-452b-9588-8749343b7d2a","Type":"ContainerStarted","Data":"6f0e87874139bf5c77823efdcfbb6114f7cec10c37383d6d56f66d1151f47839"} Jan 22 16:43:44 crc kubenswrapper[4758]: I0122 16:43:44.817403 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"d0b336b68370ee625e40b6f05f78d3e38cf1d61c80e48d4c0f21417f2aeb9ed4"} Jan 22 16:43:44 crc kubenswrapper[4758]: I0122 16:43:44.817418 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"4e70c152f84eff4ec2f397a05d06e518ec83c49b8fe5a577f81aa8dda8239367"} Jan 22 16:43:44 crc kubenswrapper[4758]: I0122 16:43:44.817439 4758 scope.go:117] "RemoveContainer" containerID="d2534229fb8e289739e191d5d234a2856a0000b3c73a9c17a9c7dddb12404503" Jan 22 16:43:44 crc kubenswrapper[4758]: I0122 16:43:44.840366 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9d6rp" podStartSLOduration=2.254326044 podStartE2EDuration="7.840351231s" podCreationTimestamp="2026-01-22 16:43:37 +0000 UTC" firstStartedPulling="2026-01-22 16:43:38.66056617 +0000 UTC m=+840.143905455" lastFinishedPulling="2026-01-22 16:43:44.246591357 +0000 UTC m=+845.729930642" observedRunningTime="2026-01-22 16:43:44.838142299 +0000 UTC m=+846.321481584" watchObservedRunningTime="2026-01-22 16:43:44.840351231 +0000 UTC m=+846.323690516" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.337294 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xbrd4"] Jan 22 16:43:46 crc kubenswrapper[4758]: E0122 16:43:46.337922 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerName="extract" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.337940 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerName="extract" Jan 22 16:43:46 crc kubenswrapper[4758]: E0122 16:43:46.337954 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerName="pull" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.337962 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerName="pull" Jan 22 16:43:46 crc kubenswrapper[4758]: E0122 16:43:46.337979 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerName="util" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.337986 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerName="util" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.338117 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="89caa1d0-37ab-4cb9-b204-30a78b86fd9f" containerName="extract" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.338613 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xbrd4" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.341722 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.343809 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2sf4f" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.350039 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xbrd4"] Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.353680 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.429646 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnwgs\" (UniqueName: \"kubernetes.io/projected/6f530d4b-935a-43a2-91a1-d3e786e42edd-kube-api-access-cnwgs\") pod \"nmstate-operator-646758c888-xbrd4\" (UID: \"6f530d4b-935a-43a2-91a1-d3e786e42edd\") " pod="openshift-nmstate/nmstate-operator-646758c888-xbrd4" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.530966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnwgs\" (UniqueName: \"kubernetes.io/projected/6f530d4b-935a-43a2-91a1-d3e786e42edd-kube-api-access-cnwgs\") pod \"nmstate-operator-646758c888-xbrd4\" (UID: \"6f530d4b-935a-43a2-91a1-d3e786e42edd\") " pod="openshift-nmstate/nmstate-operator-646758c888-xbrd4" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.554514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnwgs\" (UniqueName: \"kubernetes.io/projected/6f530d4b-935a-43a2-91a1-d3e786e42edd-kube-api-access-cnwgs\") pod \"nmstate-operator-646758c888-xbrd4\" (UID: \"6f530d4b-935a-43a2-91a1-d3e786e42edd\") " pod="openshift-nmstate/nmstate-operator-646758c888-xbrd4" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.654362 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xbrd4" Jan 22 16:43:46 crc kubenswrapper[4758]: I0122 16:43:46.909147 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xbrd4"] Jan 22 16:43:47 crc kubenswrapper[4758]: I0122 16:43:47.839581 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xbrd4" event={"ID":"6f530d4b-935a-43a2-91a1-d3e786e42edd","Type":"ContainerStarted","Data":"5b35ce45daf07f127d3eb8c0f185d6755ccfc5932f59941b4b05df6116b71a96"} Jan 22 16:43:47 crc kubenswrapper[4758]: I0122 16:43:47.978435 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:47 crc kubenswrapper[4758]: I0122 16:43:47.978480 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:49 crc kubenswrapper[4758]: I0122 16:43:49.041972 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9d6rp" podUID="a85c69d6-3710-452b-9588-8749343b7d2a" containerName="registry-server" probeResult="failure" output=< Jan 22 16:43:49 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 16:43:49 crc kubenswrapper[4758]: > Jan 22 16:43:50 crc kubenswrapper[4758]: I0122 16:43:50.863872 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xbrd4" event={"ID":"6f530d4b-935a-43a2-91a1-d3e786e42edd","Type":"ContainerStarted","Data":"b89a35aaa251754e0a903f1c7c1723adba03affc018607278d955bdff749a393"} Jan 22 16:43:50 crc kubenswrapper[4758]: I0122 16:43:50.883782 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-xbrd4" podStartSLOduration=2.042770333 podStartE2EDuration="4.883762016s" podCreationTimestamp="2026-01-22 16:43:46 +0000 UTC" firstStartedPulling="2026-01-22 16:43:46.905634753 +0000 UTC m=+848.388974038" lastFinishedPulling="2026-01-22 16:43:49.746626436 +0000 UTC m=+851.229965721" observedRunningTime="2026-01-22 16:43:50.88067337 +0000 UTC m=+852.364012675" watchObservedRunningTime="2026-01-22 16:43:50.883762016 +0000 UTC m=+852.367101301" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.603554 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zqjtk"] Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.605344 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-zqjtk" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.607976 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-v97lh" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.609276 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2"] Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.610007 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.611560 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.627337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zqjtk"] Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.644422 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2"] Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.669111 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bxw2x"] Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.675083 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.745869 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8"] Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.747252 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.749980 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ckpvf" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.750166 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.751643 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.757556 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8"] Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.771455 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9371e907-70ad-4d4e-85ed-42d886f3a58c-nmstate-lock\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.772224 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9371e907-70ad-4d4e-85ed-42d886f3a58c-dbus-socket\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.772354 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk65p\" (UniqueName: \"kubernetes.io/projected/2af60d67-9e48-435e-a5a5-3786c6e44da3-kube-api-access-jk65p\") pod \"nmstate-metrics-54757c584b-zqjtk\" (UID: \"2af60d67-9e48-435e-a5a5-3786c6e44da3\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zqjtk" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.772489 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9371e907-70ad-4d4e-85ed-42d886f3a58c-ovs-socket\") pod 
\"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.772594 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrgz6\" (UniqueName: \"kubernetes.io/projected/9371e907-70ad-4d4e-85ed-42d886f3a58c-kube-api-access-xrgz6\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.772673 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ad84bac3-9a0e-40d9-a603-7d8503a45b32-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6tvr2\" (UID: \"ad84bac3-9a0e-40d9-a603-7d8503a45b32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.772758 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78bbh\" (UniqueName: \"kubernetes.io/projected/ad84bac3-9a0e-40d9-a603-7d8503a45b32-kube-api-access-78bbh\") pod \"nmstate-webhook-8474b5b9d8-6tvr2\" (UID: \"ad84bac3-9a0e-40d9-a603-7d8503a45b32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.874688 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/851f106a-fb00-4a5d-9112-d188f5bf363d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-vf6r8\" (UID: \"851f106a-fb00-4a5d-9112-d188f5bf363d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.875264 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqgj\" (UniqueName: \"kubernetes.io/projected/851f106a-fb00-4a5d-9112-d188f5bf363d-kube-api-access-rzqgj\") pod \"nmstate-console-plugin-7754f76f8b-vf6r8\" (UID: \"851f106a-fb00-4a5d-9112-d188f5bf363d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.875463 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9371e907-70ad-4d4e-85ed-42d886f3a58c-nmstate-lock\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.875589 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9371e907-70ad-4d4e-85ed-42d886f3a58c-dbus-socket\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.875667 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk65p\" (UniqueName: \"kubernetes.io/projected/2af60d67-9e48-435e-a5a5-3786c6e44da3-kube-api-access-jk65p\") pod \"nmstate-metrics-54757c584b-zqjtk\" (UID: \"2af60d67-9e48-435e-a5a5-3786c6e44da3\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zqjtk" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.875787 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9371e907-70ad-4d4e-85ed-42d886f3a58c-ovs-socket\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.875876 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/851f106a-fb00-4a5d-9112-d188f5bf363d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-vf6r8\" (UID: \"851f106a-fb00-4a5d-9112-d188f5bf363d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.876028 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrgz6\" (UniqueName: \"kubernetes.io/projected/9371e907-70ad-4d4e-85ed-42d886f3a58c-kube-api-access-xrgz6\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.876078 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ad84bac3-9a0e-40d9-a603-7d8503a45b32-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6tvr2\" (UID: \"ad84bac3-9a0e-40d9-a603-7d8503a45b32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.876103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78bbh\" (UniqueName: \"kubernetes.io/projected/ad84bac3-9a0e-40d9-a603-7d8503a45b32-kube-api-access-78bbh\") pod \"nmstate-webhook-8474b5b9d8-6tvr2\" (UID: \"ad84bac3-9a0e-40d9-a603-7d8503a45b32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.876544 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9371e907-70ad-4d4e-85ed-42d886f3a58c-nmstate-lock\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.876660 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9371e907-70ad-4d4e-85ed-42d886f3a58c-ovs-socket\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: E0122 16:43:57.876726 4758 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 22 16:43:57 crc kubenswrapper[4758]: E0122 16:43:57.876786 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad84bac3-9a0e-40d9-a603-7d8503a45b32-tls-key-pair podName:ad84bac3-9a0e-40d9-a603-7d8503a45b32 nodeName:}" failed. No retries permitted until 2026-01-22 16:43:58.376770635 +0000 UTC m=+859.860109920 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ad84bac3-9a0e-40d9-a603-7d8503a45b32-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-6tvr2" (UID: "ad84bac3-9a0e-40d9-a603-7d8503a45b32") : secret "openshift-nmstate-webhook" not found Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.877117 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9371e907-70ad-4d4e-85ed-42d886f3a58c-dbus-socket\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.901381 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78bbh\" (UniqueName: \"kubernetes.io/projected/ad84bac3-9a0e-40d9-a603-7d8503a45b32-kube-api-access-78bbh\") pod \"nmstate-webhook-8474b5b9d8-6tvr2\" (UID: \"ad84bac3-9a0e-40d9-a603-7d8503a45b32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.916441 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrgz6\" (UniqueName: \"kubernetes.io/projected/9371e907-70ad-4d4e-85ed-42d886f3a58c-kube-api-access-xrgz6\") pod \"nmstate-handler-bxw2x\" (UID: \"9371e907-70ad-4d4e-85ed-42d886f3a58c\") " pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.918655 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk65p\" (UniqueName: \"kubernetes.io/projected/2af60d67-9e48-435e-a5a5-3786c6e44da3-kube-api-access-jk65p\") pod \"nmstate-metrics-54757c584b-zqjtk\" (UID: \"2af60d67-9e48-435e-a5a5-3786c6e44da3\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zqjtk" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.962681 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fb6558556-llwv4"] Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.964158 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.977175 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-zqjtk" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.977507 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/851f106a-fb00-4a5d-9112-d188f5bf363d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-vf6r8\" (UID: \"851f106a-fb00-4a5d-9112-d188f5bf363d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.977676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqgj\" (UniqueName: \"kubernetes.io/projected/851f106a-fb00-4a5d-9112-d188f5bf363d-kube-api-access-rzqgj\") pod \"nmstate-console-plugin-7754f76f8b-vf6r8\" (UID: \"851f106a-fb00-4a5d-9112-d188f5bf363d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.977818 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/851f106a-fb00-4a5d-9112-d188f5bf363d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-vf6r8\" (UID: \"851f106a-fb00-4a5d-9112-d188f5bf363d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.978762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/851f106a-fb00-4a5d-9112-d188f5bf363d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-vf6r8\" (UID: \"851f106a-fb00-4a5d-9112-d188f5bf363d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.981325 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/851f106a-fb00-4a5d-9112-d188f5bf363d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-vf6r8\" (UID: \"851f106a-fb00-4a5d-9112-d188f5bf363d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:57 crc kubenswrapper[4758]: I0122 16:43:57.986705 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fb6558556-llwv4"] Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.014732 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.022606 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqgj\" (UniqueName: \"kubernetes.io/projected/851f106a-fb00-4a5d-9112-d188f5bf363d-kube-api-access-rzqgj\") pod \"nmstate-console-plugin-7754f76f8b-vf6r8\" (UID: \"851f106a-fb00-4a5d-9112-d188f5bf363d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.066135 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.079243 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-oauth-serving-cert\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.079333 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-trusted-ca-bundle\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.079368 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48npb\" (UniqueName: \"kubernetes.io/projected/34be0947-bed5-41d7-9d4d-75e0d04d7421-kube-api-access-48npb\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.079394 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-service-ca\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.079424 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34be0947-bed5-41d7-9d4d-75e0d04d7421-console-serving-cert\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.079481 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34be0947-bed5-41d7-9d4d-75e0d04d7421-console-oauth-config\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.079507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-console-config\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " 
pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.089887 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.122434 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.181123 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-oauth-serving-cert\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.181211 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-trusted-ca-bundle\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.181239 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48npb\" (UniqueName: \"kubernetes.io/projected/34be0947-bed5-41d7-9d4d-75e0d04d7421-kube-api-access-48npb\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.181265 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-service-ca\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.181290 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34be0947-bed5-41d7-9d4d-75e0d04d7421-console-serving-cert\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.181325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34be0947-bed5-41d7-9d4d-75e0d04d7421-console-oauth-config\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.181776 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-console-config\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.182468 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-oauth-serving-cert\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " 
pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.182534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-service-ca\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.189311 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-console-config\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.189483 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34be0947-bed5-41d7-9d4d-75e0d04d7421-console-oauth-config\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.189517 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34be0947-bed5-41d7-9d4d-75e0d04d7421-trusted-ca-bundle\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.191776 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34be0947-bed5-41d7-9d4d-75e0d04d7421-console-serving-cert\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.199238 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48npb\" (UniqueName: \"kubernetes.io/projected/34be0947-bed5-41d7-9d4d-75e0d04d7421-kube-api-access-48npb\") pod \"console-fb6558556-llwv4\" (UID: \"34be0947-bed5-41d7-9d4d-75e0d04d7421\") " pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.282224 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.307961 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9d6rp"] Jan 22 16:43:58 crc kubenswrapper[4758]: W0122 16:43:58.310560 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2af60d67_9e48_435e_a5a5_3786c6e44da3.slice/crio-487f91128cb324902265f8785ce0e2ab961170c6e34f88b0bb37e6dbfe658f9d WatchSource:0}: Error finding container 487f91128cb324902265f8785ce0e2ab961170c6e34f88b0bb37e6dbfe658f9d: Status 404 returned error can't find the container with id 487f91128cb324902265f8785ce0e2ab961170c6e34f88b0bb37e6dbfe658f9d Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.312631 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zqjtk"] Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.384953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ad84bac3-9a0e-40d9-a603-7d8503a45b32-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6tvr2\" (UID: \"ad84bac3-9a0e-40d9-a603-7d8503a45b32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.388504 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ad84bac3-9a0e-40d9-a603-7d8503a45b32-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6tvr2\" (UID: \"ad84bac3-9a0e-40d9-a603-7d8503a45b32\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.488194 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fb6558556-llwv4"] Jan 22 16:43:58 crc kubenswrapper[4758]: W0122 16:43:58.490225 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34be0947_bed5_41d7_9d4d_75e0d04d7421.slice/crio-225cf5706fb1d1285022b352e4f6fd1d6fc107f19888443635c3a76cde35edac WatchSource:0}: Error finding container 225cf5706fb1d1285022b352e4f6fd1d6fc107f19888443635c3a76cde35edac: Status 404 returned error can't find the container with id 225cf5706fb1d1285022b352e4f6fd1d6fc107f19888443635c3a76cde35edac Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.568445 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8"] Jan 22 16:43:58 crc kubenswrapper[4758]: W0122 16:43:58.579021 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod851f106a_fb00_4a5d_9112_d188f5bf363d.slice/crio-d54a7b27a2daf93c943221c4928257ec82ed02b378bee30051f4db41d2afa1fa WatchSource:0}: Error finding container d54a7b27a2daf93c943221c4928257ec82ed02b378bee30051f4db41d2afa1fa: Status 404 returned error can't find the container with id d54a7b27a2daf93c943221c4928257ec82ed02b378bee30051f4db41d2afa1fa Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.603110 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.919989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fb6558556-llwv4" event={"ID":"34be0947-bed5-41d7-9d4d-75e0d04d7421","Type":"ContainerStarted","Data":"225cf5706fb1d1285022b352e4f6fd1d6fc107f19888443635c3a76cde35edac"} Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.920909 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zqjtk" event={"ID":"2af60d67-9e48-435e-a5a5-3786c6e44da3","Type":"ContainerStarted","Data":"487f91128cb324902265f8785ce0e2ab961170c6e34f88b0bb37e6dbfe658f9d"} Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.921637 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bxw2x" event={"ID":"9371e907-70ad-4d4e-85ed-42d886f3a58c","Type":"ContainerStarted","Data":"846068471e145c18bf5fdd2e2881ce4129c902d5a12cb7a16524bbd2c5a49d6d"} Jan 22 16:43:58 crc kubenswrapper[4758]: I0122 16:43:58.923845 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" event={"ID":"851f106a-fb00-4a5d-9112-d188f5bf363d","Type":"ContainerStarted","Data":"d54a7b27a2daf93c943221c4928257ec82ed02b378bee30051f4db41d2afa1fa"} Jan 22 16:43:59 crc kubenswrapper[4758]: I0122 16:43:59.020113 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2"] Jan 22 16:43:59 crc kubenswrapper[4758]: I0122 16:43:59.930803 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" event={"ID":"ad84bac3-9a0e-40d9-a603-7d8503a45b32","Type":"ContainerStarted","Data":"da5c338aa98392a028e931f0aac0f2cfdc1f179d9e071ce64c7a93fbfca54aca"} Jan 22 16:43:59 crc kubenswrapper[4758]: I0122 16:43:59.930957 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9d6rp" podUID="a85c69d6-3710-452b-9588-8749343b7d2a" containerName="registry-server" containerID="cri-o://6f0e87874139bf5c77823efdcfbb6114f7cec10c37383d6d56f66d1151f47839" gracePeriod=2 Jan 22 16:44:00 crc kubenswrapper[4758]: I0122 16:44:00.941022 4758 generic.go:334] "Generic (PLEG): container finished" podID="a85c69d6-3710-452b-9588-8749343b7d2a" containerID="6f0e87874139bf5c77823efdcfbb6114f7cec10c37383d6d56f66d1151f47839" exitCode=0 Jan 22 16:44:00 crc kubenswrapper[4758]: I0122 16:44:00.941065 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d6rp" event={"ID":"a85c69d6-3710-452b-9588-8749343b7d2a","Type":"ContainerDied","Data":"6f0e87874139bf5c77823efdcfbb6114f7cec10c37383d6d56f66d1151f47839"} Jan 22 16:44:00 crc kubenswrapper[4758]: I0122 16:44:00.941752 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d6rp" event={"ID":"a85c69d6-3710-452b-9588-8749343b7d2a","Type":"ContainerDied","Data":"ba21453e1fe3b3ee8fcf75cdf4066def582752ea461d928d015f01f4f1c361e0"} Jan 22 16:44:00 crc kubenswrapper[4758]: I0122 16:44:00.941794 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba21453e1fe3b3ee8fcf75cdf4066def582752ea461d928d015f01f4f1c361e0" Jan 22 16:44:00 crc kubenswrapper[4758]: I0122 16:44:00.943700 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fb6558556-llwv4" 
event={"ID":"34be0947-bed5-41d7-9d4d-75e0d04d7421","Type":"ContainerStarted","Data":"e070925d9c12c58fbb6fe8b10e46ad01586400de414a195d556f0b2eec2712b3"} Jan 22 16:44:00 crc kubenswrapper[4758]: I0122 16:44:00.965097 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fb6558556-llwv4" podStartSLOduration=3.965080251 podStartE2EDuration="3.965080251s" podCreationTimestamp="2026-01-22 16:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:44:00.960973076 +0000 UTC m=+862.444312361" watchObservedRunningTime="2026-01-22 16:44:00.965080251 +0000 UTC m=+862.448419536" Jan 22 16:44:00 crc kubenswrapper[4758]: I0122 16:44:00.969169 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.125714 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-catalog-content\") pod \"a85c69d6-3710-452b-9588-8749343b7d2a\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.125865 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-utilities\") pod \"a85c69d6-3710-452b-9588-8749343b7d2a\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.125941 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk57f\" (UniqueName: \"kubernetes.io/projected/a85c69d6-3710-452b-9588-8749343b7d2a-kube-api-access-pk57f\") pod \"a85c69d6-3710-452b-9588-8749343b7d2a\" (UID: \"a85c69d6-3710-452b-9588-8749343b7d2a\") " Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.146316 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-utilities" (OuterVolumeSpecName: "utilities") pod "a85c69d6-3710-452b-9588-8749343b7d2a" (UID: "a85c69d6-3710-452b-9588-8749343b7d2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.245660 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85c69d6-3710-452b-9588-8749343b7d2a-kube-api-access-pk57f" (OuterVolumeSpecName: "kube-api-access-pk57f") pod "a85c69d6-3710-452b-9588-8749343b7d2a" (UID: "a85c69d6-3710-452b-9588-8749343b7d2a"). InnerVolumeSpecName "kube-api-access-pk57f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.247135 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.247361 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk57f\" (UniqueName: \"kubernetes.io/projected/a85c69d6-3710-452b-9588-8749343b7d2a-kube-api-access-pk57f\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.345921 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a85c69d6-3710-452b-9588-8749343b7d2a" (UID: "a85c69d6-3710-452b-9588-8749343b7d2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.349020 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a85c69d6-3710-452b-9588-8749343b7d2a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.949276 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9d6rp" Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.980314 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9d6rp"] Jan 22 16:44:01 crc kubenswrapper[4758]: I0122 16:44:01.988117 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9d6rp"] Jan 22 16:44:02 crc kubenswrapper[4758]: I0122 16:44:02.817573 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85c69d6-3710-452b-9588-8749343b7d2a" path="/var/lib/kubelet/pods/a85c69d6-3710-452b-9588-8749343b7d2a/volumes" Jan 22 16:44:03 crc kubenswrapper[4758]: I0122 16:44:03.979287 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" event={"ID":"ad84bac3-9a0e-40d9-a603-7d8503a45b32","Type":"ContainerStarted","Data":"2bf4d46e038c924bc7d3a33bc3a44022e1c4b155307b2bccdd9683c63003f7e8"} Jan 22 16:44:03 crc kubenswrapper[4758]: I0122 16:44:03.979658 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:44:03 crc kubenswrapper[4758]: I0122 16:44:03.993625 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zqjtk" event={"ID":"2af60d67-9e48-435e-a5a5-3786c6e44da3","Type":"ContainerStarted","Data":"0981e8bd186c6c5c2bb1a8f26bd57219b2cf6b9dd77f146dade0d6847ea69246"} Jan 22 16:44:03 crc kubenswrapper[4758]: I0122 16:44:03.998652 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bxw2x" event={"ID":"9371e907-70ad-4d4e-85ed-42d886f3a58c","Type":"ContainerStarted","Data":"834c0bfc0a577aa3c1f51bde876adf0a2c78b9c87f63a0ea68e2795da9145977"} Jan 22 16:44:03 crc kubenswrapper[4758]: I0122 16:44:03.998943 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" podStartSLOduration=2.5037094079999997 podStartE2EDuration="6.99893016s" podCreationTimestamp="2026-01-22 
16:43:57 +0000 UTC" firstStartedPulling="2026-01-22 16:43:59.032806885 +0000 UTC m=+860.516146170" lastFinishedPulling="2026-01-22 16:44:03.528027637 +0000 UTC m=+865.011366922" observedRunningTime="2026-01-22 16:44:03.996905743 +0000 UTC m=+865.480245038" watchObservedRunningTime="2026-01-22 16:44:03.99893016 +0000 UTC m=+865.482269445" Jan 22 16:44:03 crc kubenswrapper[4758]: I0122 16:44:03.999585 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:44:04 crc kubenswrapper[4758]: I0122 16:44:04.000947 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" event={"ID":"851f106a-fb00-4a5d-9112-d188f5bf363d","Type":"ContainerStarted","Data":"b34b86eff23ef19355e2d0f58c71ae257fae6e9bfb69abade7d26b6686dfdf5b"} Jan 22 16:44:04 crc kubenswrapper[4758]: I0122 16:44:04.041704 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bxw2x" podStartSLOduration=1.566300358 podStartE2EDuration="7.041680811s" podCreationTimestamp="2026-01-22 16:43:57 +0000 UTC" firstStartedPulling="2026-01-22 16:43:58.050826212 +0000 UTC m=+859.534165497" lastFinishedPulling="2026-01-22 16:44:03.526206665 +0000 UTC m=+865.009545950" observedRunningTime="2026-01-22 16:44:04.020630389 +0000 UTC m=+865.503969684" watchObservedRunningTime="2026-01-22 16:44:04.041680811 +0000 UTC m=+865.525020096" Jan 22 16:44:04 crc kubenswrapper[4758]: I0122 16:44:04.053029 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vf6r8" podStartSLOduration=2.106935876 podStartE2EDuration="7.053003918s" podCreationTimestamp="2026-01-22 16:43:57 +0000 UTC" firstStartedPulling="2026-01-22 16:43:58.580402962 +0000 UTC m=+860.063742247" lastFinishedPulling="2026-01-22 16:44:03.526471004 +0000 UTC m=+865.009810289" observedRunningTime="2026-01-22 16:44:04.039033226 +0000 UTC m=+865.522372521" watchObservedRunningTime="2026-01-22 16:44:04.053003918 +0000 UTC m=+865.536343223" Jan 22 16:44:07 crc kubenswrapper[4758]: I0122 16:44:07.032575 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zqjtk" event={"ID":"2af60d67-9e48-435e-a5a5-3786c6e44da3","Type":"ContainerStarted","Data":"02d1acc453c0a1bf7f0c9ef0a972bc096f4275bf387150df5bb9615e322d2cf3"} Jan 22 16:44:07 crc kubenswrapper[4758]: I0122 16:44:07.055588 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-zqjtk" podStartSLOduration=2.476135534 podStartE2EDuration="10.055537208s" podCreationTimestamp="2026-01-22 16:43:57 +0000 UTC" firstStartedPulling="2026-01-22 16:43:58.312919441 +0000 UTC m=+859.796258726" lastFinishedPulling="2026-01-22 16:44:05.892321115 +0000 UTC m=+867.375660400" observedRunningTime="2026-01-22 16:44:07.048723806 +0000 UTC m=+868.532063121" watchObservedRunningTime="2026-01-22 16:44:07.055537208 +0000 UTC m=+868.538876503" Jan 22 16:44:08 crc kubenswrapper[4758]: I0122 16:44:08.050920 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bxw2x" Jan 22 16:44:08 crc kubenswrapper[4758]: I0122 16:44:08.283099 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:44:08 crc kubenswrapper[4758]: I0122 16:44:08.283168 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:44:08 crc kubenswrapper[4758]: I0122 16:44:08.288304 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:44:09 crc kubenswrapper[4758]: I0122 16:44:09.047161 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fb6558556-llwv4" Jan 22 16:44:09 crc kubenswrapper[4758]: I0122 16:44:09.098613 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n2kln"] Jan 22 16:44:18 crc kubenswrapper[4758]: I0122 16:44:18.607920 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6tvr2" Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.152527 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-n2kln" podUID="8f67259d-8eec-4f78-929f-01e7abe893f6" containerName="console" containerID="cri-o://60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f" gracePeriod=15 Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.834786 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n2kln_8f67259d-8eec-4f78-929f-01e7abe893f6/console/0.log" Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.835122 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.947764 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv2x7\" (UniqueName: \"kubernetes.io/projected/8f67259d-8eec-4f78-929f-01e7abe893f6-kube-api-access-dv2x7\") pod \"8f67259d-8eec-4f78-929f-01e7abe893f6\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.948111 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-console-config\") pod \"8f67259d-8eec-4f78-929f-01e7abe893f6\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.948159 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-oauth-serving-cert\") pod \"8f67259d-8eec-4f78-929f-01e7abe893f6\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.948183 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-trusted-ca-bundle\") pod \"8f67259d-8eec-4f78-929f-01e7abe893f6\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.948243 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-oauth-config\") pod \"8f67259d-8eec-4f78-929f-01e7abe893f6\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.948276 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-service-ca\") pod \"8f67259d-8eec-4f78-929f-01e7abe893f6\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.948304 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-serving-cert\") pod \"8f67259d-8eec-4f78-929f-01e7abe893f6\" (UID: \"8f67259d-8eec-4f78-929f-01e7abe893f6\") " Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.949944 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-service-ca" (OuterVolumeSpecName: "service-ca") pod "8f67259d-8eec-4f78-929f-01e7abe893f6" (UID: "8f67259d-8eec-4f78-929f-01e7abe893f6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.949932 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8f67259d-8eec-4f78-929f-01e7abe893f6" (UID: "8f67259d-8eec-4f78-929f-01e7abe893f6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.949989 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-console-config" (OuterVolumeSpecName: "console-config") pod "8f67259d-8eec-4f78-929f-01e7abe893f6" (UID: "8f67259d-8eec-4f78-929f-01e7abe893f6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.950030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8f67259d-8eec-4f78-929f-01e7abe893f6" (UID: "8f67259d-8eec-4f78-929f-01e7abe893f6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.961728 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f67259d-8eec-4f78-929f-01e7abe893f6-kube-api-access-dv2x7" (OuterVolumeSpecName: "kube-api-access-dv2x7") pod "8f67259d-8eec-4f78-929f-01e7abe893f6" (UID: "8f67259d-8eec-4f78-929f-01e7abe893f6"). InnerVolumeSpecName "kube-api-access-dv2x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.962007 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8f67259d-8eec-4f78-929f-01e7abe893f6" (UID: "8f67259d-8eec-4f78-929f-01e7abe893f6"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:44:34 crc kubenswrapper[4758]: I0122 16:44:34.962125 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8f67259d-8eec-4f78-929f-01e7abe893f6" (UID: "8f67259d-8eec-4f78-929f-01e7abe893f6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.050597 4758 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.050634 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.050646 4758 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.050656 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.050665 4758 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f67259d-8eec-4f78-929f-01e7abe893f6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.050672 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv2x7\" (UniqueName: \"kubernetes.io/projected/8f67259d-8eec-4f78-929f-01e7abe893f6-kube-api-access-dv2x7\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.050683 4758 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f67259d-8eec-4f78-929f-01e7abe893f6-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.212037 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n2kln_8f67259d-8eec-4f78-929f-01e7abe893f6/console/0.log" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.212365 4758 generic.go:334] "Generic (PLEG): container finished" podID="8f67259d-8eec-4f78-929f-01e7abe893f6" containerID="60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f" exitCode=2 Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.212425 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n2kln" event={"ID":"8f67259d-8eec-4f78-929f-01e7abe893f6","Type":"ContainerDied","Data":"60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f"} Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.212447 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n2kln" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.212485 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n2kln" event={"ID":"8f67259d-8eec-4f78-929f-01e7abe893f6","Type":"ContainerDied","Data":"8eae8d77a6d95ef19ce0215a07ffad917c59d31ab1e66c73689f56ba04b8b0b1"} Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.212508 4758 scope.go:117] "RemoveContainer" containerID="60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.243229 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n2kln"] Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.247639 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-n2kln"] Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.247989 4758 scope.go:117] "RemoveContainer" containerID="60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f" Jan 22 16:44:35 crc kubenswrapper[4758]: E0122 16:44:35.248392 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f\": container with ID starting with 60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f not found: ID does not exist" containerID="60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f" Jan 22 16:44:35 crc kubenswrapper[4758]: I0122 16:44:35.248428 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f"} err="failed to get container status \"60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f\": rpc error: code = NotFound desc = could not find container \"60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f\": container with ID starting with 60bde2bb53e4460e2d758fe68dffc61e9e1c41ffb7d0a5c6bfd5f4ca86544c4f not found: ID does not exist" Jan 22 16:44:36 crc kubenswrapper[4758]: I0122 16:44:36.815218 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f67259d-8eec-4f78-929f-01e7abe893f6" path="/var/lib/kubelet/pods/8f67259d-8eec-4f78-929f-01e7abe893f6/volumes" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.606109 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59"] Jan 22 16:44:38 crc kubenswrapper[4758]: E0122 16:44:38.606339 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85c69d6-3710-452b-9588-8749343b7d2a" containerName="extract-content" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.606351 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85c69d6-3710-452b-9588-8749343b7d2a" containerName="extract-content" Jan 22 16:44:38 crc kubenswrapper[4758]: E0122 16:44:38.606364 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85c69d6-3710-452b-9588-8749343b7d2a" containerName="extract-utilities" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.606370 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85c69d6-3710-452b-9588-8749343b7d2a" containerName="extract-utilities" Jan 22 16:44:38 crc kubenswrapper[4758]: E0122 16:44:38.606382 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a85c69d6-3710-452b-9588-8749343b7d2a" containerName="registry-server" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.606388 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85c69d6-3710-452b-9588-8749343b7d2a" containerName="registry-server" Jan 22 16:44:38 crc kubenswrapper[4758]: E0122 16:44:38.606398 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f67259d-8eec-4f78-929f-01e7abe893f6" containerName="console" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.606404 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f67259d-8eec-4f78-929f-01e7abe893f6" containerName="console" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.606526 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a85c69d6-3710-452b-9588-8749343b7d2a" containerName="registry-server" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.606536 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f67259d-8eec-4f78-929f-01e7abe893f6" containerName="console" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.607404 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.609938 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.616427 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59"] Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.631336 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.631441 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gwb\" (UniqueName: \"kubernetes.io/projected/265db705-34c5-40d6-b7ef-c58046650cc9-kube-api-access-g6gwb\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.631519 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.734071 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.734175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.734235 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6gwb\" (UniqueName: \"kubernetes.io/projected/265db705-34c5-40d6-b7ef-c58046650cc9-kube-api-access-g6gwb\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.734699 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.734714 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.760906 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6gwb\" (UniqueName: \"kubernetes.io/projected/265db705-34c5-40d6-b7ef-c58046650cc9-kube-api-access-g6gwb\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.926183 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 16:44:38 crc kubenswrapper[4758]: I0122 16:44:38.934679 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:39 crc kubenswrapper[4758]: I0122 16:44:39.608033 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59"] Jan 22 16:44:39 crc kubenswrapper[4758]: W0122 16:44:39.616291 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod265db705_34c5_40d6_b7ef_c58046650cc9.slice/crio-8dd377ca5c39252647d6907352dd27ecfbc60c02f691f19501ec7b6655eb3811 WatchSource:0}: Error finding container 8dd377ca5c39252647d6907352dd27ecfbc60c02f691f19501ec7b6655eb3811: Status 404 returned error can't find the container with id 8dd377ca5c39252647d6907352dd27ecfbc60c02f691f19501ec7b6655eb3811 Jan 22 16:44:40 crc kubenswrapper[4758]: I0122 16:44:40.250310 4758 generic.go:334] "Generic (PLEG): container finished" podID="265db705-34c5-40d6-b7ef-c58046650cc9" containerID="37d46d43768e939f2b9d5270d85447e989ede9df9777e3dfca4eb3d36ee949c8" exitCode=0 Jan 22 16:44:40 crc kubenswrapper[4758]: I0122 16:44:40.250399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" event={"ID":"265db705-34c5-40d6-b7ef-c58046650cc9","Type":"ContainerDied","Data":"37d46d43768e939f2b9d5270d85447e989ede9df9777e3dfca4eb3d36ee949c8"} Jan 22 16:44:40 crc kubenswrapper[4758]: I0122 16:44:40.250696 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" event={"ID":"265db705-34c5-40d6-b7ef-c58046650cc9","Type":"ContainerStarted","Data":"8dd377ca5c39252647d6907352dd27ecfbc60c02f691f19501ec7b6655eb3811"} Jan 22 16:44:42 crc kubenswrapper[4758]: I0122 16:44:42.271612 4758 generic.go:334] "Generic (PLEG): container finished" podID="265db705-34c5-40d6-b7ef-c58046650cc9" containerID="d3105c7e51706c6b12473991420dfb78d7be8a93174a112ea583568ef6447698" exitCode=0 Jan 22 16:44:42 crc kubenswrapper[4758]: I0122 16:44:42.271690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" event={"ID":"265db705-34c5-40d6-b7ef-c58046650cc9","Type":"ContainerDied","Data":"d3105c7e51706c6b12473991420dfb78d7be8a93174a112ea583568ef6447698"} Jan 22 16:44:43 crc kubenswrapper[4758]: I0122 16:44:43.279455 4758 generic.go:334] "Generic (PLEG): container finished" podID="265db705-34c5-40d6-b7ef-c58046650cc9" containerID="60eb54262cb9bdb462523fd900edfb5776a8f98b35fd77e1e67843639880e5aa" exitCode=0 Jan 22 16:44:43 crc kubenswrapper[4758]: I0122 16:44:43.279564 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" event={"ID":"265db705-34c5-40d6-b7ef-c58046650cc9","Type":"ContainerDied","Data":"60eb54262cb9bdb462523fd900edfb5776a8f98b35fd77e1e67843639880e5aa"} Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.577382 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.709289 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6gwb\" (UniqueName: \"kubernetes.io/projected/265db705-34c5-40d6-b7ef-c58046650cc9-kube-api-access-g6gwb\") pod \"265db705-34c5-40d6-b7ef-c58046650cc9\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.709597 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-bundle\") pod \"265db705-34c5-40d6-b7ef-c58046650cc9\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.709917 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-util\") pod \"265db705-34c5-40d6-b7ef-c58046650cc9\" (UID: \"265db705-34c5-40d6-b7ef-c58046650cc9\") " Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.711186 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-bundle" (OuterVolumeSpecName: "bundle") pod "265db705-34c5-40d6-b7ef-c58046650cc9" (UID: "265db705-34c5-40d6-b7ef-c58046650cc9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.715598 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265db705-34c5-40d6-b7ef-c58046650cc9-kube-api-access-g6gwb" (OuterVolumeSpecName: "kube-api-access-g6gwb") pod "265db705-34c5-40d6-b7ef-c58046650cc9" (UID: "265db705-34c5-40d6-b7ef-c58046650cc9"). InnerVolumeSpecName "kube-api-access-g6gwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.733782 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-util" (OuterVolumeSpecName: "util") pod "265db705-34c5-40d6-b7ef-c58046650cc9" (UID: "265db705-34c5-40d6-b7ef-c58046650cc9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.812258 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.812551 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6gwb\" (UniqueName: \"kubernetes.io/projected/265db705-34c5-40d6-b7ef-c58046650cc9-kube-api-access-g6gwb\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:44 crc kubenswrapper[4758]: I0122 16:44:44.812578 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/265db705-34c5-40d6-b7ef-c58046650cc9-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:45 crc kubenswrapper[4758]: I0122 16:44:45.297079 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" event={"ID":"265db705-34c5-40d6-b7ef-c58046650cc9","Type":"ContainerDied","Data":"8dd377ca5c39252647d6907352dd27ecfbc60c02f691f19501ec7b6655eb3811"} Jan 22 16:44:45 crc kubenswrapper[4758]: I0122 16:44:45.297141 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59" Jan 22 16:44:45 crc kubenswrapper[4758]: I0122 16:44:45.297147 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dd377ca5c39252647d6907352dd27ecfbc60c02f691f19501ec7b6655eb3811" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.507659 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r"] Jan 22 16:44:53 crc kubenswrapper[4758]: E0122 16:44:53.508475 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265db705-34c5-40d6-b7ef-c58046650cc9" containerName="extract" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.508491 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="265db705-34c5-40d6-b7ef-c58046650cc9" containerName="extract" Jan 22 16:44:53 crc kubenswrapper[4758]: E0122 16:44:53.508513 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265db705-34c5-40d6-b7ef-c58046650cc9" containerName="pull" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.508521 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="265db705-34c5-40d6-b7ef-c58046650cc9" containerName="pull" Jan 22 16:44:53 crc kubenswrapper[4758]: E0122 16:44:53.508544 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265db705-34c5-40d6-b7ef-c58046650cc9" containerName="util" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.508552 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="265db705-34c5-40d6-b7ef-c58046650cc9" containerName="util" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.508676 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="265db705-34c5-40d6-b7ef-c58046650cc9" containerName="extract" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.509229 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.511659 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.511944 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.512096 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-q7gzx" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.512580 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.515678 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.530624 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r"] Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.619350 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2sg\" (UniqueName: \"kubernetes.io/projected/8afd29cc-2dab-460e-ad9d-f17690c15f41-kube-api-access-sg2sg\") pod \"metallb-operator-controller-manager-58fc8b87c6-qmw5r\" (UID: \"8afd29cc-2dab-460e-ad9d-f17690c15f41\") " pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.619411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8afd29cc-2dab-460e-ad9d-f17690c15f41-apiservice-cert\") pod \"metallb-operator-controller-manager-58fc8b87c6-qmw5r\" (UID: \"8afd29cc-2dab-460e-ad9d-f17690c15f41\") " pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.619451 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8afd29cc-2dab-460e-ad9d-f17690c15f41-webhook-cert\") pod \"metallb-operator-controller-manager-58fc8b87c6-qmw5r\" (UID: \"8afd29cc-2dab-460e-ad9d-f17690c15f41\") " pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.720391 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2sg\" (UniqueName: \"kubernetes.io/projected/8afd29cc-2dab-460e-ad9d-f17690c15f41-kube-api-access-sg2sg\") pod \"metallb-operator-controller-manager-58fc8b87c6-qmw5r\" (UID: \"8afd29cc-2dab-460e-ad9d-f17690c15f41\") " pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.720446 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8afd29cc-2dab-460e-ad9d-f17690c15f41-apiservice-cert\") pod \"metallb-operator-controller-manager-58fc8b87c6-qmw5r\" (UID: \"8afd29cc-2dab-460e-ad9d-f17690c15f41\") " pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.720490 
4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8afd29cc-2dab-460e-ad9d-f17690c15f41-webhook-cert\") pod \"metallb-operator-controller-manager-58fc8b87c6-qmw5r\" (UID: \"8afd29cc-2dab-460e-ad9d-f17690c15f41\") " pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.726435 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8afd29cc-2dab-460e-ad9d-f17690c15f41-webhook-cert\") pod \"metallb-operator-controller-manager-58fc8b87c6-qmw5r\" (UID: \"8afd29cc-2dab-460e-ad9d-f17690c15f41\") " pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.726805 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8afd29cc-2dab-460e-ad9d-f17690c15f41-apiservice-cert\") pod \"metallb-operator-controller-manager-58fc8b87c6-qmw5r\" (UID: \"8afd29cc-2dab-460e-ad9d-f17690c15f41\") " pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.740108 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2sg\" (UniqueName: \"kubernetes.io/projected/8afd29cc-2dab-460e-ad9d-f17690c15f41-kube-api-access-sg2sg\") pod \"metallb-operator-controller-manager-58fc8b87c6-qmw5r\" (UID: \"8afd29cc-2dab-460e-ad9d-f17690c15f41\") " pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.828749 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.934972 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk"] Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.935701 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.937835 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qdnhd" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.943763 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 16:44:53 crc kubenswrapper[4758]: I0122 16:44:53.943957 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.012488 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk"] Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.024232 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c95d135e-9d68-4e7f-843f-57f2419b596c-webhook-cert\") pod \"metallb-operator-webhook-server-755c77fb5-mjxnk\" (UID: \"c95d135e-9d68-4e7f-843f-57f2419b596c\") " pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.024295 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnb67\" (UniqueName: \"kubernetes.io/projected/c95d135e-9d68-4e7f-843f-57f2419b596c-kube-api-access-lnb67\") pod \"metallb-operator-webhook-server-755c77fb5-mjxnk\" (UID: \"c95d135e-9d68-4e7f-843f-57f2419b596c\") " pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.024318 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c95d135e-9d68-4e7f-843f-57f2419b596c-apiservice-cert\") pod \"metallb-operator-webhook-server-755c77fb5-mjxnk\" (UID: \"c95d135e-9d68-4e7f-843f-57f2419b596c\") " pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.125798 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnb67\" (UniqueName: \"kubernetes.io/projected/c95d135e-9d68-4e7f-843f-57f2419b596c-kube-api-access-lnb67\") pod \"metallb-operator-webhook-server-755c77fb5-mjxnk\" (UID: \"c95d135e-9d68-4e7f-843f-57f2419b596c\") " pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.127317 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c95d135e-9d68-4e7f-843f-57f2419b596c-apiservice-cert\") pod \"metallb-operator-webhook-server-755c77fb5-mjxnk\" (UID: \"c95d135e-9d68-4e7f-843f-57f2419b596c\") " pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.127509 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c95d135e-9d68-4e7f-843f-57f2419b596c-webhook-cert\") pod \"metallb-operator-webhook-server-755c77fb5-mjxnk\" (UID: \"c95d135e-9d68-4e7f-843f-57f2419b596c\") " pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.133671 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c95d135e-9d68-4e7f-843f-57f2419b596c-webhook-cert\") pod \"metallb-operator-webhook-server-755c77fb5-mjxnk\" (UID: \"c95d135e-9d68-4e7f-843f-57f2419b596c\") " pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.153440 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnb67\" (UniqueName: \"kubernetes.io/projected/c95d135e-9d68-4e7f-843f-57f2419b596c-kube-api-access-lnb67\") pod \"metallb-operator-webhook-server-755c77fb5-mjxnk\" (UID: \"c95d135e-9d68-4e7f-843f-57f2419b596c\") " pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.156049 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c95d135e-9d68-4e7f-843f-57f2419b596c-apiservice-cert\") pod \"metallb-operator-webhook-server-755c77fb5-mjxnk\" (UID: \"c95d135e-9d68-4e7f-843f-57f2419b596c\") " pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.253993 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.437586 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r"] Jan 22 16:44:54 crc kubenswrapper[4758]: W0122 16:44:54.445823 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8afd29cc_2dab_460e_ad9d_f17690c15f41.slice/crio-169717e850ed3d09c1d4ea1dc291d7bae012d667c8a905e1d1dbedfa481dba55 WatchSource:0}: Error finding container 169717e850ed3d09c1d4ea1dc291d7bae012d667c8a905e1d1dbedfa481dba55: Status 404 returned error can't find the container with id 169717e850ed3d09c1d4ea1dc291d7bae012d667c8a905e1d1dbedfa481dba55 Jan 22 16:44:54 crc kubenswrapper[4758]: I0122 16:44:54.490109 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk"] Jan 22 16:44:54 crc kubenswrapper[4758]: W0122 16:44:54.500414 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc95d135e_9d68_4e7f_843f_57f2419b596c.slice/crio-a0ab61e20632bb8d41251e1a6f428a15e05a9d4a03bf21fefdd2fff919e4c49d WatchSource:0}: Error finding container a0ab61e20632bb8d41251e1a6f428a15e05a9d4a03bf21fefdd2fff919e4c49d: Status 404 returned error can't find the container with id a0ab61e20632bb8d41251e1a6f428a15e05a9d4a03bf21fefdd2fff919e4c49d Jan 22 16:44:55 crc kubenswrapper[4758]: I0122 16:44:55.382794 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" event={"ID":"c95d135e-9d68-4e7f-843f-57f2419b596c","Type":"ContainerStarted","Data":"a0ab61e20632bb8d41251e1a6f428a15e05a9d4a03bf21fefdd2fff919e4c49d"} Jan 22 16:44:55 crc kubenswrapper[4758]: I0122 16:44:55.385192 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" 
event={"ID":"8afd29cc-2dab-460e-ad9d-f17690c15f41","Type":"ContainerStarted","Data":"169717e850ed3d09c1d4ea1dc291d7bae012d667c8a905e1d1dbedfa481dba55"} Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.140294 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8"] Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.142760 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.147233 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.148041 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.153057 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8"] Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.261259 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krh7s\" (UniqueName: \"kubernetes.io/projected/e688668d-0d28-4d1b-aa2a-4bba257e9093-kube-api-access-krh7s\") pod \"collect-profiles-29485005-rdjt8\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.261325 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e688668d-0d28-4d1b-aa2a-4bba257e9093-secret-volume\") pod \"collect-profiles-29485005-rdjt8\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.261376 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e688668d-0d28-4d1b-aa2a-4bba257e9093-config-volume\") pod \"collect-profiles-29485005-rdjt8\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.363241 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krh7s\" (UniqueName: \"kubernetes.io/projected/e688668d-0d28-4d1b-aa2a-4bba257e9093-kube-api-access-krh7s\") pod \"collect-profiles-29485005-rdjt8\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.363319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e688668d-0d28-4d1b-aa2a-4bba257e9093-secret-volume\") pod \"collect-profiles-29485005-rdjt8\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.363378 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e688668d-0d28-4d1b-aa2a-4bba257e9093-config-volume\") pod \"collect-profiles-29485005-rdjt8\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.364915 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e688668d-0d28-4d1b-aa2a-4bba257e9093-config-volume\") pod \"collect-profiles-29485005-rdjt8\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.370395 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e688668d-0d28-4d1b-aa2a-4bba257e9093-secret-volume\") pod \"collect-profiles-29485005-rdjt8\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.381148 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krh7s\" (UniqueName: \"kubernetes.io/projected/e688668d-0d28-4d1b-aa2a-4bba257e9093-kube-api-access-krh7s\") pod \"collect-profiles-29485005-rdjt8\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:00 crc kubenswrapper[4758]: I0122 16:45:00.463212 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:02 crc kubenswrapper[4758]: I0122 16:45:02.454664 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8"] Jan 22 16:45:03 crc kubenswrapper[4758]: I0122 16:45:03.448296 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" event={"ID":"c95d135e-9d68-4e7f-843f-57f2419b596c","Type":"ContainerStarted","Data":"be784dbadf2605edc03ad03328359797c27b92afe3d9ca2c7ec88e8a1db74e17"} Jan 22 16:45:03 crc kubenswrapper[4758]: I0122 16:45:03.448646 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:45:03 crc kubenswrapper[4758]: I0122 16:45:03.450294 4758 generic.go:334] "Generic (PLEG): container finished" podID="e688668d-0d28-4d1b-aa2a-4bba257e9093" containerID="029ea761214c3d49a4e493c6aa30b929af7662057a755eb375810f493f454371" exitCode=0 Jan 22 16:45:03 crc kubenswrapper[4758]: I0122 16:45:03.450427 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" event={"ID":"e688668d-0d28-4d1b-aa2a-4bba257e9093","Type":"ContainerDied","Data":"029ea761214c3d49a4e493c6aa30b929af7662057a755eb375810f493f454371"} Jan 22 16:45:03 crc kubenswrapper[4758]: I0122 16:45:03.450448 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" event={"ID":"e688668d-0d28-4d1b-aa2a-4bba257e9093","Type":"ContainerStarted","Data":"3cfd5cd3d3784a4714dbe274036a399e0168ca30ff02411ead70351d918db948"} Jan 22 16:45:03 crc kubenswrapper[4758]: I0122 16:45:03.452455 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" event={"ID":"8afd29cc-2dab-460e-ad9d-f17690c15f41","Type":"ContainerStarted","Data":"c62d76911da0f5713e9e27fb9411fcce83f728d29a3f1dfcd100c7f9a1299640"} Jan 22 16:45:03 crc kubenswrapper[4758]: I0122 16:45:03.452829 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:45:03 crc kubenswrapper[4758]: I0122 16:45:03.468036 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" podStartSLOduration=2.742656002 podStartE2EDuration="10.468013975s" podCreationTimestamp="2026-01-22 16:44:53 +0000 UTC" firstStartedPulling="2026-01-22 16:44:54.50415125 +0000 UTC m=+915.987490535" lastFinishedPulling="2026-01-22 16:45:02.229509223 +0000 UTC m=+923.712848508" observedRunningTime="2026-01-22 16:45:03.465447765 +0000 UTC m=+924.948787060" watchObservedRunningTime="2026-01-22 16:45:03.468013975 +0000 UTC m=+924.951353260" Jan 22 16:45:03 crc kubenswrapper[4758]: I0122 16:45:03.494423 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" podStartSLOduration=4.986005885 podStartE2EDuration="10.494403653s" podCreationTimestamp="2026-01-22 16:44:53 +0000 UTC" firstStartedPulling="2026-01-22 16:44:54.448510084 +0000 UTC m=+915.931849359" lastFinishedPulling="2026-01-22 16:44:59.956907832 +0000 UTC m=+921.440247127" observedRunningTime="2026-01-22 16:45:03.488660097 +0000 UTC m=+924.971999392" watchObservedRunningTime="2026-01-22 16:45:03.494403653 +0000 UTC m=+924.977742938" Jan 22 16:45:04 crc kubenswrapper[4758]: I0122 16:45:04.769954 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:04 crc kubenswrapper[4758]: I0122 16:45:04.958923 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krh7s\" (UniqueName: \"kubernetes.io/projected/e688668d-0d28-4d1b-aa2a-4bba257e9093-kube-api-access-krh7s\") pod \"e688668d-0d28-4d1b-aa2a-4bba257e9093\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " Jan 22 16:45:04 crc kubenswrapper[4758]: I0122 16:45:04.959005 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e688668d-0d28-4d1b-aa2a-4bba257e9093-config-volume\") pod \"e688668d-0d28-4d1b-aa2a-4bba257e9093\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " Jan 22 16:45:04 crc kubenswrapper[4758]: I0122 16:45:04.959088 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e688668d-0d28-4d1b-aa2a-4bba257e9093-secret-volume\") pod \"e688668d-0d28-4d1b-aa2a-4bba257e9093\" (UID: \"e688668d-0d28-4d1b-aa2a-4bba257e9093\") " Jan 22 16:45:04 crc kubenswrapper[4758]: I0122 16:45:04.960542 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e688668d-0d28-4d1b-aa2a-4bba257e9093-config-volume" (OuterVolumeSpecName: "config-volume") pod "e688668d-0d28-4d1b-aa2a-4bba257e9093" (UID: "e688668d-0d28-4d1b-aa2a-4bba257e9093"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:45:04 crc kubenswrapper[4758]: I0122 16:45:04.967043 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e688668d-0d28-4d1b-aa2a-4bba257e9093-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e688668d-0d28-4d1b-aa2a-4bba257e9093" (UID: "e688668d-0d28-4d1b-aa2a-4bba257e9093"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:45:04 crc kubenswrapper[4758]: I0122 16:45:04.967128 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e688668d-0d28-4d1b-aa2a-4bba257e9093-kube-api-access-krh7s" (OuterVolumeSpecName: "kube-api-access-krh7s") pod "e688668d-0d28-4d1b-aa2a-4bba257e9093" (UID: "e688668d-0d28-4d1b-aa2a-4bba257e9093"). InnerVolumeSpecName "kube-api-access-krh7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:45:05 crc kubenswrapper[4758]: I0122 16:45:05.060454 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krh7s\" (UniqueName: \"kubernetes.io/projected/e688668d-0d28-4d1b-aa2a-4bba257e9093-kube-api-access-krh7s\") on node \"crc\" DevicePath \"\"" Jan 22 16:45:05 crc kubenswrapper[4758]: I0122 16:45:05.060500 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e688668d-0d28-4d1b-aa2a-4bba257e9093-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:45:05 crc kubenswrapper[4758]: I0122 16:45:05.060509 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e688668d-0d28-4d1b-aa2a-4bba257e9093-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:45:05 crc kubenswrapper[4758]: I0122 16:45:05.466331 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" event={"ID":"e688668d-0d28-4d1b-aa2a-4bba257e9093","Type":"ContainerDied","Data":"3cfd5cd3d3784a4714dbe274036a399e0168ca30ff02411ead70351d918db948"} Jan 22 16:45:05 crc kubenswrapper[4758]: I0122 16:45:05.466371 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cfd5cd3d3784a4714dbe274036a399e0168ca30ff02411ead70351d918db948" Jan 22 16:45:05 crc kubenswrapper[4758]: I0122 16:45:05.466379 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8" Jan 22 16:45:14 crc kubenswrapper[4758]: I0122 16:45:14.259853 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-755c77fb5-mjxnk" Jan 22 16:45:33 crc kubenswrapper[4758]: I0122 16:45:33.831458 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.726640 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qs76m"] Jan 22 16:45:34 crc kubenswrapper[4758]: E0122 16:45:34.727153 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e688668d-0d28-4d1b-aa2a-4bba257e9093" containerName="collect-profiles" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.727169 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e688668d-0d28-4d1b-aa2a-4bba257e9093" containerName="collect-profiles" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.727294 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e688668d-0d28-4d1b-aa2a-4bba257e9093" containerName="collect-profiles" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.729242 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.743305 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.744557 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.745242 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-s75rc" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.764808 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4"] Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.773219 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.778662 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.785068 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4"] Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.835639 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lpprz"] Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.837261 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lpprz" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.839719 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.839892 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.840604 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.844417 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9jfxj" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.849975 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-k8lvt"] Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.851552 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.856764 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.878446 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-k8lvt"] Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.909900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-frr-startup\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.909935 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-metrics-certs\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.909974 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx6d7\" (UniqueName: \"kubernetes.io/projected/4612798c-6ae6-4a07-afe6-3f3574ee467b-kube-api-access-zx6d7\") pod \"frr-k8s-webhook-server-7df86c4f6c-np2j4\" (UID: \"4612798c-6ae6-4a07-afe6-3f3574ee467b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.909998 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4612798c-6ae6-4a07-afe6-3f3574ee467b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-np2j4\" (UID: \"4612798c-6ae6-4a07-afe6-3f3574ee467b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.910013 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-frr-sockets\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.910037 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-frr-conf\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.910074 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-reloader\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.910097 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ftn8\" (UniqueName: \"kubernetes.io/projected/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-kube-api-access-6ftn8\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:34 crc kubenswrapper[4758]: I0122 16:45:34.910116 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-metrics\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.011480 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-reloader\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.011761 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftn8\" (UniqueName: \"kubernetes.io/projected/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-kube-api-access-6ftn8\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.011860 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-metrics\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.012133 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-memberlist\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.012355 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-frr-startup\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.013246 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc433179-ae5b-4250-80c2-97af371fdfed-metallb-excludel2\") pod \"speaker-lpprz\" (UID: 
\"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.013379 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba3d731b-c87e-4003-a063-9977ae4eb0a2-metrics-certs\") pod \"controller-6968d8fdc4-k8lvt\" (UID: \"ba3d731b-c87e-4003-a063-9977ae4eb0a2\") " pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.012254 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-reloader\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.012316 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-metrics\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.013216 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-frr-startup\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.013464 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwb9x\" (UniqueName: \"kubernetes.io/projected/cc433179-ae5b-4250-80c2-97af371fdfed-kube-api-access-hwb9x\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.014081 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-metrics-certs\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.015038 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba3d731b-c87e-4003-a063-9977ae4eb0a2-cert\") pod \"controller-6968d8fdc4-k8lvt\" (UID: \"ba3d731b-c87e-4003-a063-9977ae4eb0a2\") " pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.015192 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx6d7\" (UniqueName: \"kubernetes.io/projected/4612798c-6ae6-4a07-afe6-3f3574ee467b-kube-api-access-zx6d7\") pod \"frr-k8s-webhook-server-7df86c4f6c-np2j4\" (UID: \"4612798c-6ae6-4a07-afe6-3f3574ee467b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.015316 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gpvz\" (UniqueName: \"kubernetes.io/projected/ba3d731b-c87e-4003-a063-9977ae4eb0a2-kube-api-access-2gpvz\") pod \"controller-6968d8fdc4-k8lvt\" (UID: \"ba3d731b-c87e-4003-a063-9977ae4eb0a2\") " pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc 
kubenswrapper[4758]: I0122 16:45:35.015443 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4612798c-6ae6-4a07-afe6-3f3574ee467b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-np2j4\" (UID: \"4612798c-6ae6-4a07-afe6-3f3574ee467b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.015551 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-frr-sockets\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.015650 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-frr-conf\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: E0122 16:45:35.015581 4758 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.015862 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-frr-sockets\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: E0122 16:45:35.015868 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4612798c-6ae6-4a07-afe6-3f3574ee467b-cert podName:4612798c-6ae6-4a07-afe6-3f3574ee467b nodeName:}" failed. No retries permitted until 2026-01-22 16:45:35.51583708 +0000 UTC m=+956.999176455 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4612798c-6ae6-4a07-afe6-3f3574ee467b-cert") pod "frr-k8s-webhook-server-7df86c4f6c-np2j4" (UID: "4612798c-6ae6-4a07-afe6-3f3574ee467b") : secret "frr-k8s-webhook-server-cert" not found Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.015928 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-metrics-certs\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.015970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-frr-conf\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.025524 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-metrics-certs\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.030247 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftn8\" (UniqueName: \"kubernetes.io/projected/00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10-kube-api-access-6ftn8\") pod \"frr-k8s-qs76m\" (UID: \"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10\") " pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.043710 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx6d7\" (UniqueName: \"kubernetes.io/projected/4612798c-6ae6-4a07-afe6-3f3574ee467b-kube-api-access-zx6d7\") pod \"frr-k8s-webhook-server-7df86c4f6c-np2j4\" (UID: \"4612798c-6ae6-4a07-afe6-3f3574ee467b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.080328 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.117113 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gpvz\" (UniqueName: \"kubernetes.io/projected/ba3d731b-c87e-4003-a063-9977ae4eb0a2-kube-api-access-2gpvz\") pod \"controller-6968d8fdc4-k8lvt\" (UID: \"ba3d731b-c87e-4003-a063-9977ae4eb0a2\") " pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.118052 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-metrics-certs\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.118285 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-memberlist\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.118396 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc433179-ae5b-4250-80c2-97af371fdfed-metallb-excludel2\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.118478 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwb9x\" (UniqueName: \"kubernetes.io/projected/cc433179-ae5b-4250-80c2-97af371fdfed-kube-api-access-hwb9x\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.118567 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba3d731b-c87e-4003-a063-9977ae4eb0a2-metrics-certs\") pod \"controller-6968d8fdc4-k8lvt\" (UID: \"ba3d731b-c87e-4003-a063-9977ae4eb0a2\") " pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.118657 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba3d731b-c87e-4003-a063-9977ae4eb0a2-cert\") pod \"controller-6968d8fdc4-k8lvt\" (UID: \"ba3d731b-c87e-4003-a063-9977ae4eb0a2\") " pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc kubenswrapper[4758]: E0122 16:45:35.118215 4758 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 22 16:45:35 crc kubenswrapper[4758]: E0122 16:45:35.119685 4758 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 16:45:35 crc kubenswrapper[4758]: E0122 16:45:35.119897 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-metrics-certs podName:cc433179-ae5b-4250-80c2-97af371fdfed nodeName:}" failed. No retries permitted until 2026-01-22 16:45:35.619875607 +0000 UTC m=+957.103214892 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-metrics-certs") pod "speaker-lpprz" (UID: "cc433179-ae5b-4250-80c2-97af371fdfed") : secret "speaker-certs-secret" not found Jan 22 16:45:35 crc kubenswrapper[4758]: E0122 16:45:35.120022 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-memberlist podName:cc433179-ae5b-4250-80c2-97af371fdfed nodeName:}" failed. No retries permitted until 2026-01-22 16:45:35.620012731 +0000 UTC m=+957.103352016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-memberlist") pod "speaker-lpprz" (UID: "cc433179-ae5b-4250-80c2-97af371fdfed") : secret "metallb-memberlist" not found Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.119985 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc433179-ae5b-4250-80c2-97af371fdfed-metallb-excludel2\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.121996 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba3d731b-c87e-4003-a063-9977ae4eb0a2-cert\") pod \"controller-6968d8fdc4-k8lvt\" (UID: \"ba3d731b-c87e-4003-a063-9977ae4eb0a2\") " pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.122915 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba3d731b-c87e-4003-a063-9977ae4eb0a2-metrics-certs\") pod \"controller-6968d8fdc4-k8lvt\" (UID: \"ba3d731b-c87e-4003-a063-9977ae4eb0a2\") " pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.136109 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwb9x\" (UniqueName: \"kubernetes.io/projected/cc433179-ae5b-4250-80c2-97af371fdfed-kube-api-access-hwb9x\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.138690 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gpvz\" (UniqueName: \"kubernetes.io/projected/ba3d731b-c87e-4003-a063-9977ae4eb0a2-kube-api-access-2gpvz\") pod \"controller-6968d8fdc4-k8lvt\" (UID: \"ba3d731b-c87e-4003-a063-9977ae4eb0a2\") " pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.182020 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.578908 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4612798c-6ae6-4a07-afe6-3f3574ee467b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-np2j4\" (UID: \"4612798c-6ae6-4a07-afe6-3f3574ee467b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.587799 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4612798c-6ae6-4a07-afe6-3f3574ee467b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-np2j4\" (UID: \"4612798c-6ae6-4a07-afe6-3f3574ee467b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.597884 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-k8lvt"] Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.657087 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerStarted","Data":"e7853da2bec9396f8f3330bcc708fb22ebfefb2cd0d3834790abe5d22b57bd73"} Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.660363 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k8lvt" event={"ID":"ba3d731b-c87e-4003-a063-9977ae4eb0a2","Type":"ContainerStarted","Data":"9c55d60489d7dc59987a9d5af70e5e7bf027fb2ef8dfad3ef40001270c5ac50b"} Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.680469 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-metrics-certs\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.680606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-memberlist\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: E0122 16:45:35.680848 4758 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 16:45:35 crc kubenswrapper[4758]: E0122 16:45:35.680929 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-memberlist podName:cc433179-ae5b-4250-80c2-97af371fdfed nodeName:}" failed. No retries permitted until 2026-01-22 16:45:36.68089406 +0000 UTC m=+958.164233345 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-memberlist") pod "speaker-lpprz" (UID: "cc433179-ae5b-4250-80c2-97af371fdfed") : secret "metallb-memberlist" not found Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.683627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-metrics-certs\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:35 crc kubenswrapper[4758]: I0122 16:45:35.727531 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:36 crc kubenswrapper[4758]: I0122 16:45:36.195000 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4"] Jan 22 16:45:36 crc kubenswrapper[4758]: I0122 16:45:36.668462 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k8lvt" event={"ID":"ba3d731b-c87e-4003-a063-9977ae4eb0a2","Type":"ContainerStarted","Data":"c5252073f1398bc45971ab30def8b37e812c5a595074c276739f26a0926caf9a"} Jan 22 16:45:36 crc kubenswrapper[4758]: I0122 16:45:36.668506 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-k8lvt" event={"ID":"ba3d731b-c87e-4003-a063-9977ae4eb0a2","Type":"ContainerStarted","Data":"663187b3ea0c08ef7fa127145b9d9ab45d1fa42c5baeb64b13a3d657191371cd"} Jan 22 16:45:36 crc kubenswrapper[4758]: I0122 16:45:36.668970 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:36 crc kubenswrapper[4758]: I0122 16:45:36.670338 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" event={"ID":"4612798c-6ae6-4a07-afe6-3f3574ee467b","Type":"ContainerStarted","Data":"c06b07f3b6b14c19ca965aea568dbe9762f248cac4d8ea9e46221be7386dd378"} Jan 22 16:45:36 crc kubenswrapper[4758]: I0122 16:45:36.689347 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-k8lvt" podStartSLOduration=2.68932968 podStartE2EDuration="2.68932968s" podCreationTimestamp="2026-01-22 16:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:45:36.686811931 +0000 UTC m=+958.170151226" watchObservedRunningTime="2026-01-22 16:45:36.68932968 +0000 UTC m=+958.172668965" Jan 22 16:45:36 crc kubenswrapper[4758]: I0122 16:45:36.695433 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-memberlist\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:36 crc kubenswrapper[4758]: I0122 16:45:36.701332 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc433179-ae5b-4250-80c2-97af371fdfed-memberlist\") pod \"speaker-lpprz\" (UID: \"cc433179-ae5b-4250-80c2-97af371fdfed\") " pod="metallb-system/speaker-lpprz" Jan 22 16:45:36 crc kubenswrapper[4758]: I0122 16:45:36.952457 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lpprz" Jan 22 16:45:36 crc kubenswrapper[4758]: W0122 16:45:36.975081 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc433179_ae5b_4250_80c2_97af371fdfed.slice/crio-afa7798c107db39c848b75bedff195e5186bcc3d390106f77eeb2f75557f7890 WatchSource:0}: Error finding container afa7798c107db39c848b75bedff195e5186bcc3d390106f77eeb2f75557f7890: Status 404 returned error can't find the container with id afa7798c107db39c848b75bedff195e5186bcc3d390106f77eeb2f75557f7890 Jan 22 16:45:37 crc kubenswrapper[4758]: I0122 16:45:37.682590 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lpprz" event={"ID":"cc433179-ae5b-4250-80c2-97af371fdfed","Type":"ContainerStarted","Data":"94d80fab259bbdba24e6cb6f6b906c1c7fc7544cc57f0cf0de9ee3c67a648b6c"} Jan 22 16:45:37 crc kubenswrapper[4758]: I0122 16:45:37.682634 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lpprz" event={"ID":"cc433179-ae5b-4250-80c2-97af371fdfed","Type":"ContainerStarted","Data":"afa7798c107db39c848b75bedff195e5186bcc3d390106f77eeb2f75557f7890"} Jan 22 16:45:38 crc kubenswrapper[4758]: I0122 16:45:38.708248 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lpprz" event={"ID":"cc433179-ae5b-4250-80c2-97af371fdfed","Type":"ContainerStarted","Data":"bf8dd6e0c0a8ca70f9a82d08be679055f70d6e03c1bbbd00b400c13eeae3f84c"} Jan 22 16:45:38 crc kubenswrapper[4758]: I0122 16:45:38.710670 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lpprz" Jan 22 16:45:38 crc kubenswrapper[4758]: I0122 16:45:38.735685 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lpprz" podStartSLOduration=4.735647312 podStartE2EDuration="4.735647312s" podCreationTimestamp="2026-01-22 16:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:45:38.731686854 +0000 UTC m=+960.215026139" watchObservedRunningTime="2026-01-22 16:45:38.735647312 +0000 UTC m=+960.218986607" Jan 22 16:45:45 crc kubenswrapper[4758]: I0122 16:45:45.187990 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-k8lvt" Jan 22 16:45:48 crc kubenswrapper[4758]: I0122 16:45:48.910135 4758 generic.go:334] "Generic (PLEG): container finished" podID="00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10" containerID="dcd3c50d34cb1d9181d800b2f46b36d37ad680bb88b6ad586bdf491668d362aa" exitCode=0 Jan 22 16:45:48 crc kubenswrapper[4758]: I0122 16:45:48.912372 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerDied","Data":"dcd3c50d34cb1d9181d800b2f46b36d37ad680bb88b6ad586bdf491668d362aa"} Jan 22 16:45:48 crc kubenswrapper[4758]: I0122 16:45:48.914652 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" event={"ID":"4612798c-6ae6-4a07-afe6-3f3574ee467b","Type":"ContainerStarted","Data":"a86ae74b37544ab164be41ebf400131e9e7d915da894679621c4bbdc42ef92f9"} Jan 22 16:45:48 crc kubenswrapper[4758]: I0122 16:45:48.915314 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:45:49 crc kubenswrapper[4758]: I0122 
16:45:49.028139 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" podStartSLOduration=3.490014357 podStartE2EDuration="15.028120114s" podCreationTimestamp="2026-01-22 16:45:34 +0000 UTC" firstStartedPulling="2026-01-22 16:45:36.203484256 +0000 UTC m=+957.686823541" lastFinishedPulling="2026-01-22 16:45:47.741590013 +0000 UTC m=+969.224929298" observedRunningTime="2026-01-22 16:45:49.023651722 +0000 UTC m=+970.506991027" watchObservedRunningTime="2026-01-22 16:45:49.028120114 +0000 UTC m=+970.511459419" Jan 22 16:45:49 crc kubenswrapper[4758]: I0122 16:45:49.922326 4758 generic.go:334] "Generic (PLEG): container finished" podID="00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10" containerID="6e7a2e6957eb2ab61de1b21ebb60af5ac954e9cf9051a2ba47cd9f3071ba695c" exitCode=0 Jan 22 16:45:49 crc kubenswrapper[4758]: I0122 16:45:49.922422 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerDied","Data":"6e7a2e6957eb2ab61de1b21ebb60af5ac954e9cf9051a2ba47cd9f3071ba695c"} Jan 22 16:45:50 crc kubenswrapper[4758]: I0122 16:45:50.930597 4758 generic.go:334] "Generic (PLEG): container finished" podID="00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10" containerID="73a226af1451d617b9d14b30cfe5177eaf03643c576da01e9906bc05ea80821a" exitCode=0 Jan 22 16:45:50 crc kubenswrapper[4758]: I0122 16:45:50.930650 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerDied","Data":"73a226af1451d617b9d14b30cfe5177eaf03643c576da01e9906bc05ea80821a"} Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.273496 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5qlt8"] Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.276700 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.281786 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5qlt8"] Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.369492 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-utilities\") pod \"certified-operators-5qlt8\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.369635 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-catalog-content\") pod \"certified-operators-5qlt8\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.369935 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc8m6\" (UniqueName: \"kubernetes.io/projected/9cd072aa-6f55-4c19-9024-23418136f65f-kube-api-access-gc8m6\") pod \"certified-operators-5qlt8\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.470829 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc8m6\" (UniqueName: \"kubernetes.io/projected/9cd072aa-6f55-4c19-9024-23418136f65f-kube-api-access-gc8m6\") pod \"certified-operators-5qlt8\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.470883 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-utilities\") pod \"certified-operators-5qlt8\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.470914 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-catalog-content\") pod \"certified-operators-5qlt8\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.471422 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-catalog-content\") pod \"certified-operators-5qlt8\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.471584 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-utilities\") pod \"certified-operators-5qlt8\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.490910 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gc8m6\" (UniqueName: \"kubernetes.io/projected/9cd072aa-6f55-4c19-9024-23418136f65f-kube-api-access-gc8m6\") pod \"certified-operators-5qlt8\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:51 crc kubenswrapper[4758]: I0122 16:45:51.604490 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:45:52 crc kubenswrapper[4758]: I0122 16:45:52.051057 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5qlt8"] Jan 22 16:45:52 crc kubenswrapper[4758]: W0122 16:45:52.054603 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cd072aa_6f55_4c19_9024_23418136f65f.slice/crio-f4aab53cd0e574f151d9451ca860e3bd238b09050fe37ef64f442d6740300a10 WatchSource:0}: Error finding container f4aab53cd0e574f151d9451ca860e3bd238b09050fe37ef64f442d6740300a10: Status 404 returned error can't find the container with id f4aab53cd0e574f151d9451ca860e3bd238b09050fe37ef64f442d6740300a10 Jan 22 16:45:52 crc kubenswrapper[4758]: I0122 16:45:52.949906 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerStarted","Data":"4ef92952a4e0a41fd3d8401bc63c13e5f0163d0d7ad1ccc22fa358715824edab"} Jan 22 16:45:52 crc kubenswrapper[4758]: I0122 16:45:52.950227 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerStarted","Data":"731fd1b005e1c49d2a89ab2f576026779a2e12636e2d8e90a1c31c625ac48057"} Jan 22 16:45:52 crc kubenswrapper[4758]: I0122 16:45:52.950239 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerStarted","Data":"c560a166a460b2325b5d6f5d546b5af7bb30d43c7f5ef25ba32dec5d25b29a6c"} Jan 22 16:45:52 crc kubenswrapper[4758]: I0122 16:45:52.950247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerStarted","Data":"f943e7edaeed9bcf949388715b710ca7446938063c0991360e6ad2e2e48345f5"} Jan 22 16:45:52 crc kubenswrapper[4758]: I0122 16:45:52.951713 4758 generic.go:334] "Generic (PLEG): container finished" podID="9cd072aa-6f55-4c19-9024-23418136f65f" containerID="7929c3ed447f182ca1785addbf1433d789f29540b78ed6cb69bd864d8bbcbcd3" exitCode=0 Jan 22 16:45:52 crc kubenswrapper[4758]: I0122 16:45:52.951768 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qlt8" event={"ID":"9cd072aa-6f55-4c19-9024-23418136f65f","Type":"ContainerDied","Data":"7929c3ed447f182ca1785addbf1433d789f29540b78ed6cb69bd864d8bbcbcd3"} Jan 22 16:45:52 crc kubenswrapper[4758]: I0122 16:45:52.951857 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qlt8" event={"ID":"9cd072aa-6f55-4c19-9024-23418136f65f","Type":"ContainerStarted","Data":"f4aab53cd0e574f151d9451ca860e3bd238b09050fe37ef64f442d6740300a10"} Jan 22 16:45:53 crc kubenswrapper[4758]: I0122 16:45:53.966616 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" 
event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerStarted","Data":"14d0be9a51c04ff1b0ec6ebe2ebe8cb0fd3119f5ff2ba37083a9d08a8e043bd4"} Jan 22 16:45:53 crc kubenswrapper[4758]: I0122 16:45:53.967096 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qs76m" event={"ID":"00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10","Type":"ContainerStarted","Data":"efc8440464261bc54198ca65c09cedb631e96d5de3067d6faf872241e94232e2"} Jan 22 16:45:53 crc kubenswrapper[4758]: I0122 16:45:53.967134 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:53 crc kubenswrapper[4758]: I0122 16:45:53.971708 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qlt8" event={"ID":"9cd072aa-6f55-4c19-9024-23418136f65f","Type":"ContainerStarted","Data":"64852037e7ee05f698fd25172ead9730a2890b74a7a8aa024078c0e3a64abafa"} Jan 22 16:45:54 crc kubenswrapper[4758]: I0122 16:45:54.031614 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qs76m" podStartSLOduration=7.892477306 podStartE2EDuration="20.031594057s" podCreationTimestamp="2026-01-22 16:45:34 +0000 UTC" firstStartedPulling="2026-01-22 16:45:35.578301613 +0000 UTC m=+957.061640898" lastFinishedPulling="2026-01-22 16:45:47.717418354 +0000 UTC m=+969.200757649" observedRunningTime="2026-01-22 16:45:54.010634786 +0000 UTC m=+975.493974081" watchObservedRunningTime="2026-01-22 16:45:54.031594057 +0000 UTC m=+975.514933342" Jan 22 16:45:54 crc kubenswrapper[4758]: I0122 16:45:54.984572 4758 generic.go:334] "Generic (PLEG): container finished" podID="9cd072aa-6f55-4c19-9024-23418136f65f" containerID="64852037e7ee05f698fd25172ead9730a2890b74a7a8aa024078c0e3a64abafa" exitCode=0 Jan 22 16:45:54 crc kubenswrapper[4758]: I0122 16:45:54.984701 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qlt8" event={"ID":"9cd072aa-6f55-4c19-9024-23418136f65f","Type":"ContainerDied","Data":"64852037e7ee05f698fd25172ead9730a2890b74a7a8aa024078c0e3a64abafa"} Jan 22 16:45:55 crc kubenswrapper[4758]: I0122 16:45:55.081090 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:55 crc kubenswrapper[4758]: I0122 16:45:55.159676 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qs76m" Jan 22 16:45:56 crc kubenswrapper[4758]: I0122 16:45:56.957120 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lpprz" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.004882 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qlt8" event={"ID":"9cd072aa-6f55-4c19-9024-23418136f65f","Type":"ContainerStarted","Data":"ae0d403e54c4af2114b775c5a3e7d3d427c24ec711f1233c958c26a49eb9f1ba"} Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.029550 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5qlt8" podStartSLOduration=3.022436749 podStartE2EDuration="6.029531269s" podCreationTimestamp="2026-01-22 16:45:51 +0000 UTC" firstStartedPulling="2026-01-22 16:45:52.953085787 +0000 UTC m=+974.436425072" lastFinishedPulling="2026-01-22 16:45:55.960180277 +0000 UTC m=+977.443519592" observedRunningTime="2026-01-22 16:45:57.024723638 +0000 UTC m=+978.508062943" 
watchObservedRunningTime="2026-01-22 16:45:57.029531269 +0000 UTC m=+978.512870554" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.052886 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cfln4"] Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.054099 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.064339 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfln4"] Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.176646 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-utilities\") pod \"redhat-marketplace-cfln4\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.176967 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-catalog-content\") pod \"redhat-marketplace-cfln4\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.177114 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrs8\" (UniqueName: \"kubernetes.io/projected/1b86affa-3b24-465c-9b74-ee0c04652dd2-kube-api-access-nmrs8\") pod \"redhat-marketplace-cfln4\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.278192 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmrs8\" (UniqueName: \"kubernetes.io/projected/1b86affa-3b24-465c-9b74-ee0c04652dd2-kube-api-access-nmrs8\") pod \"redhat-marketplace-cfln4\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.278524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-utilities\") pod \"redhat-marketplace-cfln4\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.278620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-catalog-content\") pod \"redhat-marketplace-cfln4\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.279070 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-catalog-content\") pod \"redhat-marketplace-cfln4\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.279173 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-utilities\") pod \"redhat-marketplace-cfln4\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.306959 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmrs8\" (UniqueName: \"kubernetes.io/projected/1b86affa-3b24-465c-9b74-ee0c04652dd2-kube-api-access-nmrs8\") pod \"redhat-marketplace-cfln4\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.373734 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:45:57 crc kubenswrapper[4758]: I0122 16:45:57.937569 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfln4"] Jan 22 16:45:58 crc kubenswrapper[4758]: I0122 16:45:58.011289 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfln4" event={"ID":"1b86affa-3b24-465c-9b74-ee0c04652dd2","Type":"ContainerStarted","Data":"5dd10d770be49e22567180de5860a36edef1d4ebab24a0fbb07672df629eb711"} Jan 22 16:45:59 crc kubenswrapper[4758]: I0122 16:45:59.019766 4758 generic.go:334] "Generic (PLEG): container finished" podID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerID="7bf2a524170dc134ba96821c6a14402f6ba26a1f82c0e288059f346490788e60" exitCode=0 Jan 22 16:45:59 crc kubenswrapper[4758]: I0122 16:45:59.019824 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfln4" event={"ID":"1b86affa-3b24-465c-9b74-ee0c04652dd2","Type":"ContainerDied","Data":"7bf2a524170dc134ba96821c6a14402f6ba26a1f82c0e288059f346490788e60"} Jan 22 16:46:01 crc kubenswrapper[4758]: I0122 16:46:01.152752 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfln4" event={"ID":"1b86affa-3b24-465c-9b74-ee0c04652dd2","Type":"ContainerStarted","Data":"9225049c3f4b09caff6ed5d92647111c087fb8d962e55cf88541c29162518874"} Jan 22 16:46:01 crc kubenswrapper[4758]: I0122 16:46:01.605851 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:46:01 crc kubenswrapper[4758]: I0122 16:46:01.606176 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:46:01 crc kubenswrapper[4758]: I0122 16:46:01.657001 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:46:02 crc kubenswrapper[4758]: I0122 16:46:02.163803 4758 generic.go:334] "Generic (PLEG): container finished" podID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerID="9225049c3f4b09caff6ed5d92647111c087fb8d962e55cf88541c29162518874" exitCode=0 Jan 22 16:46:02 crc kubenswrapper[4758]: I0122 16:46:02.163885 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfln4" event={"ID":"1b86affa-3b24-465c-9b74-ee0c04652dd2","Type":"ContainerDied","Data":"9225049c3f4b09caff6ed5d92647111c087fb8d962e55cf88541c29162518874"} Jan 22 16:46:02 crc kubenswrapper[4758]: I0122 16:46:02.243917 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:46:03 crc kubenswrapper[4758]: I0122 16:46:03.191418 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfln4" event={"ID":"1b86affa-3b24-465c-9b74-ee0c04652dd2","Type":"ContainerStarted","Data":"34781f20db16fcc09a1b7202692d6ad9108146bc2c1ce4c9632a9c7f8dccdf49"} Jan 22 16:46:03 crc kubenswrapper[4758]: I0122 16:46:03.217272 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cfln4" podStartSLOduration=2.566650663 podStartE2EDuration="6.217242375s" podCreationTimestamp="2026-01-22 16:45:57 +0000 UTC" firstStartedPulling="2026-01-22 16:45:59.02173127 +0000 UTC m=+980.505070555" lastFinishedPulling="2026-01-22 16:46:02.672322982 +0000 UTC m=+984.155662267" observedRunningTime="2026-01-22 16:46:03.212464686 +0000 UTC m=+984.695803971" watchObservedRunningTime="2026-01-22 16:46:03.217242375 +0000 UTC m=+984.700581680" Jan 22 16:46:03 crc kubenswrapper[4758]: I0122 16:46:03.852797 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gvt49"] Jan 22 16:46:03 crc kubenswrapper[4758]: I0122 16:46:03.853653 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gvt49" Jan 22 16:46:03 crc kubenswrapper[4758]: I0122 16:46:03.856082 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ck689" Jan 22 16:46:03 crc kubenswrapper[4758]: I0122 16:46:03.856108 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 16:46:03 crc kubenswrapper[4758]: I0122 16:46:03.856495 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 16:46:03 crc kubenswrapper[4758]: I0122 16:46:03.868196 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gvt49"] Jan 22 16:46:03 crc kubenswrapper[4758]: I0122 16:46:03.993335 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnkbc\" (UniqueName: \"kubernetes.io/projected/c721cd63-b13a-43f8-a903-f8a996d9c478-kube-api-access-mnkbc\") pod \"openstack-operator-index-gvt49\" (UID: \"c721cd63-b13a-43f8-a903-f8a996d9c478\") " pod="openstack-operators/openstack-operator-index-gvt49" Jan 22 16:46:04 crc kubenswrapper[4758]: I0122 16:46:04.094793 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnkbc\" (UniqueName: \"kubernetes.io/projected/c721cd63-b13a-43f8-a903-f8a996d9c478-kube-api-access-mnkbc\") pod \"openstack-operator-index-gvt49\" (UID: \"c721cd63-b13a-43f8-a903-f8a996d9c478\") " pod="openstack-operators/openstack-operator-index-gvt49" Jan 22 16:46:04 crc kubenswrapper[4758]: I0122 16:46:04.113961 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnkbc\" (UniqueName: \"kubernetes.io/projected/c721cd63-b13a-43f8-a903-f8a996d9c478-kube-api-access-mnkbc\") pod \"openstack-operator-index-gvt49\" (UID: \"c721cd63-b13a-43f8-a903-f8a996d9c478\") " pod="openstack-operators/openstack-operator-index-gvt49" Jan 22 16:46:04 crc kubenswrapper[4758]: I0122 16:46:04.179304 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gvt49" Jan 22 16:46:04 crc kubenswrapper[4758]: I0122 16:46:04.767473 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gvt49"] Jan 22 16:46:04 crc kubenswrapper[4758]: W0122 16:46:04.771851 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc721cd63_b13a_43f8_a903_f8a996d9c478.slice/crio-0ae1a9341071c5898c37a193ce6f637031adfb90a3750c9fb59ac5a331d13f36 WatchSource:0}: Error finding container 0ae1a9341071c5898c37a193ce6f637031adfb90a3750c9fb59ac5a331d13f36: Status 404 returned error can't find the container with id 0ae1a9341071c5898c37a193ce6f637031adfb90a3750c9fb59ac5a331d13f36 Jan 22 16:46:05 crc kubenswrapper[4758]: I0122 16:46:05.084568 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qs76m" Jan 22 16:46:05 crc kubenswrapper[4758]: I0122 16:46:05.214647 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gvt49" event={"ID":"c721cd63-b13a-43f8-a903-f8a996d9c478","Type":"ContainerStarted","Data":"0ae1a9341071c5898c37a193ce6f637031adfb90a3750c9fb59ac5a331d13f36"} Jan 22 16:46:05 crc kubenswrapper[4758]: I0122 16:46:05.737415 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 16:46:07 crc kubenswrapper[4758]: I0122 16:46:07.373940 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:46:07 crc kubenswrapper[4758]: I0122 16:46:07.374261 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:46:07 crc kubenswrapper[4758]: I0122 16:46:07.486063 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:46:08 crc kubenswrapper[4758]: I0122 16:46:08.306822 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.055396 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5qlt8"] Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.056345 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5qlt8" podUID="9cd072aa-6f55-4c19-9024-23418136f65f" containerName="registry-server" containerID="cri-o://ae0d403e54c4af2114b775c5a3e7d3d427c24ec711f1233c958c26a49eb9f1ba" gracePeriod=2 Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.253123 4758 generic.go:334] "Generic (PLEG): container finished" podID="9cd072aa-6f55-4c19-9024-23418136f65f" containerID="ae0d403e54c4af2114b775c5a3e7d3d427c24ec711f1233c958c26a49eb9f1ba" exitCode=0 Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.253193 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qlt8" event={"ID":"9cd072aa-6f55-4c19-9024-23418136f65f","Type":"ContainerDied","Data":"ae0d403e54c4af2114b775c5a3e7d3d427c24ec711f1233c958c26a49eb9f1ba"} Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.255849 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gvt49" 
event={"ID":"c721cd63-b13a-43f8-a903-f8a996d9c478","Type":"ContainerStarted","Data":"e5fcd5d10179d79e15e308bae411ed7a97fd8d25786db12632d69712afc5a5f7"} Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.279227 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gvt49" podStartSLOduration=2.628459517 podStartE2EDuration="6.279047089s" podCreationTimestamp="2026-01-22 16:46:03 +0000 UTC" firstStartedPulling="2026-01-22 16:46:04.774839304 +0000 UTC m=+986.258178599" lastFinishedPulling="2026-01-22 16:46:08.425426886 +0000 UTC m=+989.908766171" observedRunningTime="2026-01-22 16:46:09.274082294 +0000 UTC m=+990.757421579" watchObservedRunningTime="2026-01-22 16:46:09.279047089 +0000 UTC m=+990.762386374" Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.500041 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.524137 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc8m6\" (UniqueName: \"kubernetes.io/projected/9cd072aa-6f55-4c19-9024-23418136f65f-kube-api-access-gc8m6\") pod \"9cd072aa-6f55-4c19-9024-23418136f65f\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.524190 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-catalog-content\") pod \"9cd072aa-6f55-4c19-9024-23418136f65f\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.524260 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-utilities\") pod \"9cd072aa-6f55-4c19-9024-23418136f65f\" (UID: \"9cd072aa-6f55-4c19-9024-23418136f65f\") " Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.525889 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-utilities" (OuterVolumeSpecName: "utilities") pod "9cd072aa-6f55-4c19-9024-23418136f65f" (UID: "9cd072aa-6f55-4c19-9024-23418136f65f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.530552 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd072aa-6f55-4c19-9024-23418136f65f-kube-api-access-gc8m6" (OuterVolumeSpecName: "kube-api-access-gc8m6") pod "9cd072aa-6f55-4c19-9024-23418136f65f" (UID: "9cd072aa-6f55-4c19-9024-23418136f65f"). InnerVolumeSpecName "kube-api-access-gc8m6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.533118 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc8m6\" (UniqueName: \"kubernetes.io/projected/9cd072aa-6f55-4c19-9024-23418136f65f-kube-api-access-gc8m6\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.533144 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.583700 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cd072aa-6f55-4c19-9024-23418136f65f" (UID: "9cd072aa-6f55-4c19-9024-23418136f65f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:46:09 crc kubenswrapper[4758]: I0122 16:46:09.634704 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd072aa-6f55-4c19-9024-23418136f65f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.262126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qlt8" event={"ID":"9cd072aa-6f55-4c19-9024-23418136f65f","Type":"ContainerDied","Data":"f4aab53cd0e574f151d9451ca860e3bd238b09050fe37ef64f442d6740300a10"} Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.262465 4758 scope.go:117] "RemoveContainer" containerID="ae0d403e54c4af2114b775c5a3e7d3d427c24ec711f1233c958c26a49eb9f1ba" Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.262149 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5qlt8" Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.276646 4758 scope.go:117] "RemoveContainer" containerID="64852037e7ee05f698fd25172ead9730a2890b74a7a8aa024078c0e3a64abafa" Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.295012 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5qlt8"] Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.299602 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5qlt8"] Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.319232 4758 scope.go:117] "RemoveContainer" containerID="7929c3ed447f182ca1785addbf1433d789f29540b78ed6cb69bd864d8bbcbcd3" Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.643612 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfln4"] Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.643944 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cfln4" podUID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerName="registry-server" containerID="cri-o://34781f20db16fcc09a1b7202692d6ad9108146bc2c1ce4c9632a9c7f8dccdf49" gracePeriod=2 Jan 22 16:46:10 crc kubenswrapper[4758]: I0122 16:46:10.818350 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd072aa-6f55-4c19-9024-23418136f65f" path="/var/lib/kubelet/pods/9cd072aa-6f55-4c19-9024-23418136f65f/volumes" Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.289770 4758 generic.go:334] "Generic (PLEG): container finished" podID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerID="34781f20db16fcc09a1b7202692d6ad9108146bc2c1ce4c9632a9c7f8dccdf49" exitCode=0 Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.289831 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfln4" event={"ID":"1b86affa-3b24-465c-9b74-ee0c04652dd2","Type":"ContainerDied","Data":"34781f20db16fcc09a1b7202692d6ad9108146bc2c1ce4c9632a9c7f8dccdf49"} Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.539088 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.557969 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-catalog-content\") pod \"1b86affa-3b24-465c-9b74-ee0c04652dd2\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.558099 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-utilities\") pod \"1b86affa-3b24-465c-9b74-ee0c04652dd2\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.558183 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmrs8\" (UniqueName: \"kubernetes.io/projected/1b86affa-3b24-465c-9b74-ee0c04652dd2-kube-api-access-nmrs8\") pod \"1b86affa-3b24-465c-9b74-ee0c04652dd2\" (UID: \"1b86affa-3b24-465c-9b74-ee0c04652dd2\") " Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.559176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-utilities" (OuterVolumeSpecName: "utilities") pod "1b86affa-3b24-465c-9b74-ee0c04652dd2" (UID: "1b86affa-3b24-465c-9b74-ee0c04652dd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.563554 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b86affa-3b24-465c-9b74-ee0c04652dd2-kube-api-access-nmrs8" (OuterVolumeSpecName: "kube-api-access-nmrs8") pod "1b86affa-3b24-465c-9b74-ee0c04652dd2" (UID: "1b86affa-3b24-465c-9b74-ee0c04652dd2"). InnerVolumeSpecName "kube-api-access-nmrs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.595628 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b86affa-3b24-465c-9b74-ee0c04652dd2" (UID: "1b86affa-3b24-465c-9b74-ee0c04652dd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.659431 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.659471 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmrs8\" (UniqueName: \"kubernetes.io/projected/1b86affa-3b24-465c-9b74-ee0c04652dd2-kube-api-access-nmrs8\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:11 crc kubenswrapper[4758]: I0122 16:46:11.659485 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b86affa-3b24-465c-9b74-ee0c04652dd2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:12 crc kubenswrapper[4758]: I0122 16:46:12.298861 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfln4" event={"ID":"1b86affa-3b24-465c-9b74-ee0c04652dd2","Type":"ContainerDied","Data":"5dd10d770be49e22567180de5860a36edef1d4ebab24a0fbb07672df629eb711"} Jan 22 16:46:12 crc kubenswrapper[4758]: I0122 16:46:12.298912 4758 scope.go:117] "RemoveContainer" containerID="34781f20db16fcc09a1b7202692d6ad9108146bc2c1ce4c9632a9c7f8dccdf49" Jan 22 16:46:12 crc kubenswrapper[4758]: I0122 16:46:12.298955 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cfln4" Jan 22 16:46:12 crc kubenswrapper[4758]: I0122 16:46:12.312937 4758 scope.go:117] "RemoveContainer" containerID="9225049c3f4b09caff6ed5d92647111c087fb8d962e55cf88541c29162518874" Jan 22 16:46:12 crc kubenswrapper[4758]: I0122 16:46:12.330301 4758 scope.go:117] "RemoveContainer" containerID="7bf2a524170dc134ba96821c6a14402f6ba26a1f82c0e288059f346490788e60" Jan 22 16:46:12 crc kubenswrapper[4758]: I0122 16:46:12.336298 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfln4"] Jan 22 16:46:12 crc kubenswrapper[4758]: I0122 16:46:12.344001 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfln4"] Jan 22 16:46:12 crc kubenswrapper[4758]: I0122 16:46:12.817651 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b86affa-3b24-465c-9b74-ee0c04652dd2" path="/var/lib/kubelet/pods/1b86affa-3b24-465c-9b74-ee0c04652dd2/volumes" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.836971 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.837028 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.854667 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4h4ds"] Jan 22 16:46:13 crc kubenswrapper[4758]: E0122 16:46:13.854982 4758 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerName="extract-content" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.855000 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerName="extract-content" Jan 22 16:46:13 crc kubenswrapper[4758]: E0122 16:46:13.855013 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerName="extract-utilities" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.855023 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerName="extract-utilities" Jan 22 16:46:13 crc kubenswrapper[4758]: E0122 16:46:13.855032 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd072aa-6f55-4c19-9024-23418136f65f" containerName="extract-content" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.855041 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd072aa-6f55-4c19-9024-23418136f65f" containerName="extract-content" Jan 22 16:46:13 crc kubenswrapper[4758]: E0122 16:46:13.855055 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerName="registry-server" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.855062 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerName="registry-server" Jan 22 16:46:13 crc kubenswrapper[4758]: E0122 16:46:13.855077 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd072aa-6f55-4c19-9024-23418136f65f" containerName="registry-server" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.855086 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd072aa-6f55-4c19-9024-23418136f65f" containerName="registry-server" Jan 22 16:46:13 crc kubenswrapper[4758]: E0122 16:46:13.855094 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd072aa-6f55-4c19-9024-23418136f65f" containerName="extract-utilities" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.855101 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd072aa-6f55-4c19-9024-23418136f65f" containerName="extract-utilities" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.855263 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b86affa-3b24-465c-9b74-ee0c04652dd2" containerName="registry-server" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.855275 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd072aa-6f55-4c19-9024-23418136f65f" containerName="registry-server" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.856167 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.869917 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4h4ds"] Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.888086 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-catalog-content\") pod \"community-operators-4h4ds\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.888222 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-utilities\") pod \"community-operators-4h4ds\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.888267 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjpfj\" (UniqueName: \"kubernetes.io/projected/af754b71-e3af-432b-b8bc-923ce8ca8b3d-kube-api-access-cjpfj\") pod \"community-operators-4h4ds\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.989402 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-utilities\") pod \"community-operators-4h4ds\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.989466 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpfj\" (UniqueName: \"kubernetes.io/projected/af754b71-e3af-432b-b8bc-923ce8ca8b3d-kube-api-access-cjpfj\") pod \"community-operators-4h4ds\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.989508 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-catalog-content\") pod \"community-operators-4h4ds\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.990247 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-catalog-content\") pod \"community-operators-4h4ds\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:13 crc kubenswrapper[4758]: I0122 16:46:13.990343 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-utilities\") pod \"community-operators-4h4ds\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:14 crc kubenswrapper[4758]: I0122 16:46:14.015067 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cjpfj\" (UniqueName: \"kubernetes.io/projected/af754b71-e3af-432b-b8bc-923ce8ca8b3d-kube-api-access-cjpfj\") pod \"community-operators-4h4ds\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:14 crc kubenswrapper[4758]: I0122 16:46:14.179940 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gvt49" Jan 22 16:46:14 crc kubenswrapper[4758]: I0122 16:46:14.179980 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gvt49" Jan 22 16:46:14 crc kubenswrapper[4758]: I0122 16:46:14.216859 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:14 crc kubenswrapper[4758]: I0122 16:46:14.252212 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gvt49" Jan 22 16:46:14 crc kubenswrapper[4758]: I0122 16:46:14.368556 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gvt49" Jan 22 16:46:14 crc kubenswrapper[4758]: I0122 16:46:14.729958 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4h4ds"] Jan 22 16:46:15 crc kubenswrapper[4758]: I0122 16:46:15.325447 4758 generic.go:334] "Generic (PLEG): container finished" podID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerID="5e2ffad723f17425583976481cad5bd53f48d3a4601f5f7f0f7946e92e366da1" exitCode=0 Jan 22 16:46:15 crc kubenswrapper[4758]: I0122 16:46:15.325504 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h4ds" event={"ID":"af754b71-e3af-432b-b8bc-923ce8ca8b3d","Type":"ContainerDied","Data":"5e2ffad723f17425583976481cad5bd53f48d3a4601f5f7f0f7946e92e366da1"} Jan 22 16:46:15 crc kubenswrapper[4758]: I0122 16:46:15.325807 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h4ds" event={"ID":"af754b71-e3af-432b-b8bc-923ce8ca8b3d","Type":"ContainerStarted","Data":"160b4cc2e2014b3068d65630da1a2a0d51e1f8ab06fbee5e56f289bafedb974a"} Jan 22 16:46:17 crc kubenswrapper[4758]: I0122 16:46:17.890328 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j"] Jan 22 16:46:17 crc kubenswrapper[4758]: I0122 16:46:17.892086 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:17 crc kubenswrapper[4758]: I0122 16:46:17.897579 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2xkpq" Jan 22 16:46:17 crc kubenswrapper[4758]: I0122 16:46:17.902450 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j"] Jan 22 16:46:17 crc kubenswrapper[4758]: I0122 16:46:17.950460 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-bundle\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:17 crc kubenswrapper[4758]: I0122 16:46:17.950549 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-util\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:17 crc kubenswrapper[4758]: I0122 16:46:17.950572 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdxmm\" (UniqueName: \"kubernetes.io/projected/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-kube-api-access-hdxmm\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.051459 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-bundle\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.051550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-util\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.051575 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdxmm\" (UniqueName: \"kubernetes.io/projected/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-kube-api-access-hdxmm\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.052119 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-bundle\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.052152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-util\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.078087 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdxmm\" (UniqueName: \"kubernetes.io/projected/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-kube-api-access-hdxmm\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.235044 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.356150 4758 generic.go:334] "Generic (PLEG): container finished" podID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerID="5020f3d4cf7799b42841b1a4aeb0c834f9f28dbdd704ba9580e3aa0fdfd35420" exitCode=0 Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.356197 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h4ds" event={"ID":"af754b71-e3af-432b-b8bc-923ce8ca8b3d","Type":"ContainerDied","Data":"5020f3d4cf7799b42841b1a4aeb0c834f9f28dbdd704ba9580e3aa0fdfd35420"} Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.363198 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:46:18 crc kubenswrapper[4758]: I0122 16:46:18.548666 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j"] Jan 22 16:46:18 crc kubenswrapper[4758]: W0122 16:46:18.558843 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b41ab64_3525_4cfb_a7b6_1d3a59959fd2.slice/crio-c532b9cd245af9314b01d57b4ff184e32ec9e00b83ace06d594757952a00552f WatchSource:0}: Error finding container c532b9cd245af9314b01d57b4ff184e32ec9e00b83ace06d594757952a00552f: Status 404 returned error can't find the container with id c532b9cd245af9314b01d57b4ff184e32ec9e00b83ace06d594757952a00552f Jan 22 16:46:19 crc kubenswrapper[4758]: I0122 16:46:19.363565 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" event={"ID":"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2","Type":"ContainerStarted","Data":"4808104e3b79eb2f8d84a621ff9c4a39f7b68366e8344c5ff3ec2c7b7c026865"} Jan 22 16:46:19 crc kubenswrapper[4758]: I0122 16:46:19.363871 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" 
event={"ID":"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2","Type":"ContainerStarted","Data":"c532b9cd245af9314b01d57b4ff184e32ec9e00b83ace06d594757952a00552f"} Jan 22 16:46:20 crc kubenswrapper[4758]: I0122 16:46:20.372571 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h4ds" event={"ID":"af754b71-e3af-432b-b8bc-923ce8ca8b3d","Type":"ContainerStarted","Data":"3f40c7b6863b69f871d6034df04b86321def4537a018a8035f8a5443977ec643"} Jan 22 16:46:20 crc kubenswrapper[4758]: I0122 16:46:20.374924 4758 generic.go:334] "Generic (PLEG): container finished" podID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" containerID="4808104e3b79eb2f8d84a621ff9c4a39f7b68366e8344c5ff3ec2c7b7c026865" exitCode=0 Jan 22 16:46:20 crc kubenswrapper[4758]: I0122 16:46:20.374969 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" event={"ID":"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2","Type":"ContainerDied","Data":"4808104e3b79eb2f8d84a621ff9c4a39f7b68366e8344c5ff3ec2c7b7c026865"} Jan 22 16:46:20 crc kubenswrapper[4758]: I0122 16:46:20.400602 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4h4ds" podStartSLOduration=3.10323571 podStartE2EDuration="7.400583456s" podCreationTimestamp="2026-01-22 16:46:13 +0000 UTC" firstStartedPulling="2026-01-22 16:46:15.328309041 +0000 UTC m=+996.811648346" lastFinishedPulling="2026-01-22 16:46:19.625656797 +0000 UTC m=+1001.108996092" observedRunningTime="2026-01-22 16:46:20.396530766 +0000 UTC m=+1001.879870051" watchObservedRunningTime="2026-01-22 16:46:20.400583456 +0000 UTC m=+1001.883922741" Jan 22 16:46:23 crc kubenswrapper[4758]: I0122 16:46:23.405012 4758 generic.go:334] "Generic (PLEG): container finished" podID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" containerID="eafd9da3d077ef6f683e2eb1f89b26ade71ee6b4fb7de0497c77af2c702c5fab" exitCode=0 Jan 22 16:46:23 crc kubenswrapper[4758]: I0122 16:46:23.405129 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" event={"ID":"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2","Type":"ContainerDied","Data":"eafd9da3d077ef6f683e2eb1f89b26ade71ee6b4fb7de0497c77af2c702c5fab"} Jan 22 16:46:24 crc kubenswrapper[4758]: I0122 16:46:24.217116 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:24 crc kubenswrapper[4758]: I0122 16:46:24.217265 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:24 crc kubenswrapper[4758]: I0122 16:46:24.270170 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:24 crc kubenswrapper[4758]: I0122 16:46:24.455473 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:26 crc kubenswrapper[4758]: I0122 16:46:26.047677 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4h4ds"] Jan 22 16:46:26 crc kubenswrapper[4758]: I0122 16:46:26.426842 4758 generic.go:334] "Generic (PLEG): container finished" podID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" containerID="ea60522bb5aaf7117810a921af7baf2a60ef6f6dbec1fc256ab5cbdc6b65c815" exitCode=0 Jan 22 16:46:26 crc 
kubenswrapper[4758]: I0122 16:46:26.427970 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" event={"ID":"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2","Type":"ContainerDied","Data":"ea60522bb5aaf7117810a921af7baf2a60ef6f6dbec1fc256ab5cbdc6b65c815"} Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.438528 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4h4ds" podUID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerName="registry-server" containerID="cri-o://3f40c7b6863b69f871d6034df04b86321def4537a018a8035f8a5443977ec643" gracePeriod=2 Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.702199 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.882607 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-bundle\") pod \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.882652 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdxmm\" (UniqueName: \"kubernetes.io/projected/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-kube-api-access-hdxmm\") pod \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.882758 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-util\") pod \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\" (UID: \"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2\") " Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.884954 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-bundle" (OuterVolumeSpecName: "bundle") pod "4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" (UID: "4b41ab64-3525-4cfb-a7b6-1d3a59959fd2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.890701 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-kube-api-access-hdxmm" (OuterVolumeSpecName: "kube-api-access-hdxmm") pod "4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" (UID: "4b41ab64-3525-4cfb-a7b6-1d3a59959fd2"). InnerVolumeSpecName "kube-api-access-hdxmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.899202 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-util" (OuterVolumeSpecName: "util") pod "4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" (UID: "4b41ab64-3525-4cfb-a7b6-1d3a59959fd2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.986625 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.986671 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:27 crc kubenswrapper[4758]: I0122 16:46:27.986685 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdxmm\" (UniqueName: \"kubernetes.io/projected/4b41ab64-3525-4cfb-a7b6-1d3a59959fd2-kube-api-access-hdxmm\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.446975 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" event={"ID":"4b41ab64-3525-4cfb-a7b6-1d3a59959fd2","Type":"ContainerDied","Data":"c532b9cd245af9314b01d57b4ff184e32ec9e00b83ace06d594757952a00552f"} Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.447026 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c532b9cd245af9314b01d57b4ff184e32ec9e00b83ace06d594757952a00552f" Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.447023 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j" Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.449701 4758 generic.go:334] "Generic (PLEG): container finished" podID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerID="3f40c7b6863b69f871d6034df04b86321def4537a018a8035f8a5443977ec643" exitCode=0 Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.449751 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h4ds" event={"ID":"af754b71-e3af-432b-b8bc-923ce8ca8b3d","Type":"ContainerDied","Data":"3f40c7b6863b69f871d6034df04b86321def4537a018a8035f8a5443977ec643"} Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.500774 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.594950 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-catalog-content\") pod \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.595121 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjpfj\" (UniqueName: \"kubernetes.io/projected/af754b71-e3af-432b-b8bc-923ce8ca8b3d-kube-api-access-cjpfj\") pod \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.595346 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-utilities\") pod \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\" (UID: \"af754b71-e3af-432b-b8bc-923ce8ca8b3d\") " Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.596826 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-utilities" (OuterVolumeSpecName: "utilities") pod "af754b71-e3af-432b-b8bc-923ce8ca8b3d" (UID: "af754b71-e3af-432b-b8bc-923ce8ca8b3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.600931 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af754b71-e3af-432b-b8bc-923ce8ca8b3d-kube-api-access-cjpfj" (OuterVolumeSpecName: "kube-api-access-cjpfj") pod "af754b71-e3af-432b-b8bc-923ce8ca8b3d" (UID: "af754b71-e3af-432b-b8bc-923ce8ca8b3d"). InnerVolumeSpecName "kube-api-access-cjpfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.697027 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.697064 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjpfj\" (UniqueName: \"kubernetes.io/projected/af754b71-e3af-432b-b8bc-923ce8ca8b3d-kube-api-access-cjpfj\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.896644 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af754b71-e3af-432b-b8bc-923ce8ca8b3d" (UID: "af754b71-e3af-432b-b8bc-923ce8ca8b3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:46:28 crc kubenswrapper[4758]: I0122 16:46:28.899502 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af754b71-e3af-432b-b8bc-923ce8ca8b3d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:46:29 crc kubenswrapper[4758]: I0122 16:46:29.460836 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h4ds" event={"ID":"af754b71-e3af-432b-b8bc-923ce8ca8b3d","Type":"ContainerDied","Data":"160b4cc2e2014b3068d65630da1a2a0d51e1f8ab06fbee5e56f289bafedb974a"} Jan 22 16:46:29 crc kubenswrapper[4758]: I0122 16:46:29.460890 4758 scope.go:117] "RemoveContainer" containerID="3f40c7b6863b69f871d6034df04b86321def4537a018a8035f8a5443977ec643" Jan 22 16:46:29 crc kubenswrapper[4758]: I0122 16:46:29.460899 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4h4ds" Jan 22 16:46:29 crc kubenswrapper[4758]: I0122 16:46:29.487921 4758 scope.go:117] "RemoveContainer" containerID="5020f3d4cf7799b42841b1a4aeb0c834f9f28dbdd704ba9580e3aa0fdfd35420" Jan 22 16:46:29 crc kubenswrapper[4758]: I0122 16:46:29.495090 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4h4ds"] Jan 22 16:46:29 crc kubenswrapper[4758]: I0122 16:46:29.500609 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4h4ds"] Jan 22 16:46:29 crc kubenswrapper[4758]: I0122 16:46:29.538205 4758 scope.go:117] "RemoveContainer" containerID="5e2ffad723f17425583976481cad5bd53f48d3a4601f5f7f0f7946e92e366da1" Jan 22 16:46:30 crc kubenswrapper[4758]: I0122 16:46:30.822538 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" path="/var/lib/kubelet/pods/af754b71-e3af-432b-b8bc-923ce8ca8b3d/volumes" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.529139 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7"] Jan 22 16:46:33 crc kubenswrapper[4758]: E0122 16:46:33.529434 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerName="extract-content" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.529447 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerName="extract-content" Jan 22 16:46:33 crc kubenswrapper[4758]: E0122 16:46:33.529456 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerName="extract-utilities" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.529462 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerName="extract-utilities" Jan 22 16:46:33 crc kubenswrapper[4758]: E0122 16:46:33.529479 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerName="registry-server" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.529487 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerName="registry-server" Jan 22 16:46:33 crc kubenswrapper[4758]: E0122 16:46:33.529502 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" 
containerName="extract" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.529508 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" containerName="extract" Jan 22 16:46:33 crc kubenswrapper[4758]: E0122 16:46:33.529520 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" containerName="util" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.529526 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" containerName="util" Jan 22 16:46:33 crc kubenswrapper[4758]: E0122 16:46:33.529537 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" containerName="pull" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.529543 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" containerName="pull" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.529652 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="af754b71-e3af-432b-b8bc-923ce8ca8b3d" containerName="registry-server" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.529663 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b41ab64-3525-4cfb-a7b6-1d3a59959fd2" containerName="extract" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.530183 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.532626 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8t2s8" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.538824 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glmv\" (UniqueName: \"kubernetes.io/projected/4801e5d3-a66d-4856-bfc2-95dfebf6f442-kube-api-access-2glmv\") pod \"openstack-operator-controller-init-b7565899b-vlqs7\" (UID: \"4801e5d3-a66d-4856-bfc2-95dfebf6f442\") " pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.548574 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7"] Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.640867 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2glmv\" (UniqueName: \"kubernetes.io/projected/4801e5d3-a66d-4856-bfc2-95dfebf6f442-kube-api-access-2glmv\") pod \"openstack-operator-controller-init-b7565899b-vlqs7\" (UID: \"4801e5d3-a66d-4856-bfc2-95dfebf6f442\") " pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.665334 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2glmv\" (UniqueName: \"kubernetes.io/projected/4801e5d3-a66d-4856-bfc2-95dfebf6f442-kube-api-access-2glmv\") pod \"openstack-operator-controller-init-b7565899b-vlqs7\" (UID: \"4801e5d3-a66d-4856-bfc2-95dfebf6f442\") " pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" Jan 22 16:46:33 crc kubenswrapper[4758]: I0122 16:46:33.850818 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" Jan 22 16:46:34 crc kubenswrapper[4758]: I0122 16:46:34.337778 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7"] Jan 22 16:46:34 crc kubenswrapper[4758]: I0122 16:46:34.495077 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" event={"ID":"4801e5d3-a66d-4856-bfc2-95dfebf6f442","Type":"ContainerStarted","Data":"72842c4504b4b27766b78669565690c6fc54db9e02e22e2bde5f42d92917a548"} Jan 22 16:46:42 crc kubenswrapper[4758]: I0122 16:46:42.701707 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" event={"ID":"4801e5d3-a66d-4856-bfc2-95dfebf6f442","Type":"ContainerStarted","Data":"34e5ed2937b7a59087b73abe476686d3020b33f60f84c8d5a883a13c7960304d"} Jan 22 16:46:42 crc kubenswrapper[4758]: I0122 16:46:42.702269 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" Jan 22 16:46:42 crc kubenswrapper[4758]: I0122 16:46:42.743509 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" podStartSLOduration=2.440155967 podStartE2EDuration="9.743494638s" podCreationTimestamp="2026-01-22 16:46:33 +0000 UTC" firstStartedPulling="2026-01-22 16:46:34.35178645 +0000 UTC m=+1015.835125735" lastFinishedPulling="2026-01-22 16:46:41.655125121 +0000 UTC m=+1023.138464406" observedRunningTime="2026-01-22 16:46:42.742319306 +0000 UTC m=+1024.225658611" watchObservedRunningTime="2026-01-22 16:46:42.743494638 +0000 UTC m=+1024.226833923" Jan 22 16:46:43 crc kubenswrapper[4758]: I0122 16:46:43.837379 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:46:43 crc kubenswrapper[4758]: I0122 16:46:43.837693 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:46:53 crc kubenswrapper[4758]: I0122 16:46:53.865818 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" Jan 22 16:47:13 crc kubenswrapper[4758]: I0122 16:47:13.837144 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:47:13 crc kubenswrapper[4758]: I0122 16:47:13.837843 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 22 16:47:13 crc kubenswrapper[4758]: I0122 16:47:13.837913 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:47:13 crc kubenswrapper[4758]: I0122 16:47:13.838707 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e70c152f84eff4ec2f397a05d06e518ec83c49b8fe5a577f81aa8dda8239367"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:47:13 crc kubenswrapper[4758]: I0122 16:47:13.838806 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://4e70c152f84eff4ec2f397a05d06e518ec83c49b8fe5a577f81aa8dda8239367" gracePeriod=600 Jan 22 16:47:14 crc kubenswrapper[4758]: I0122 16:47:14.027215 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="4e70c152f84eff4ec2f397a05d06e518ec83c49b8fe5a577f81aa8dda8239367" exitCode=0 Jan 22 16:47:14 crc kubenswrapper[4758]: I0122 16:47:14.027403 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"4e70c152f84eff4ec2f397a05d06e518ec83c49b8fe5a577f81aa8dda8239367"} Jan 22 16:47:14 crc kubenswrapper[4758]: I0122 16:47:14.027701 4758 scope.go:117] "RemoveContainer" containerID="d0b336b68370ee625e40b6f05f78d3e38cf1d61c80e48d4c0f21417f2aeb9ed4" Jan 22 16:47:15 crc kubenswrapper[4758]: I0122 16:47:15.038298 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"b601f6fca756de859a726aaa8ab0d3554a8d02de3dc2055608cf851a04506590"} Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.126484 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.128502 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.130821 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9zqsl" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.144494 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.152720 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.154009 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.157230 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pdg6h" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.165035 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.165903 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.168282 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-brw4q" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.210632 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.245314 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.246874 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7rw\" (UniqueName: \"kubernetes.io/projected/901f347a-3b10-4392-8247-41a859112544-kube-api-access-jt7rw\") pod \"designate-operator-controller-manager-b45d7bf98-2mr2s\" (UID: \"901f347a-3b10-4392-8247-41a859112544\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.246997 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45jf\" (UniqueName: \"kubernetes.io/projected/e7fdd2cd-e517-46b5-acb3-22b59b7f132f-kube-api-access-l45jf\") pod \"cinder-operator-controller-manager-69cf5d4557-tlt96\" (UID: \"e7fdd2cd-e517-46b5-acb3-22b59b7f132f\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.247159 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v56km\" (UniqueName: \"kubernetes.io/projected/c3e0f5c7-10cb-441c-9516-f6de8fe29757-kube-api-access-v56km\") pod \"barbican-operator-controller-manager-59dd8b7cbf-s8q8p\" (UID: \"c3e0f5c7-10cb-441c-9516-f6de8fe29757\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.285759 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.286760 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.295337 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p9vjx" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.305680 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.313770 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.314586 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.320896 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8x67n" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.340643 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.355879 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.357094 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.365067 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zpd54" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.366934 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.367698 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.368939 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v56km\" (UniqueName: \"kubernetes.io/projected/c3e0f5c7-10cb-441c-9516-f6de8fe29757-kube-api-access-v56km\") pod \"barbican-operator-controller-manager-59dd8b7cbf-s8q8p\" (UID: \"c3e0f5c7-10cb-441c-9516-f6de8fe29757\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.369120 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7rw\" (UniqueName: \"kubernetes.io/projected/901f347a-3b10-4392-8247-41a859112544-kube-api-access-jt7rw\") pod \"designate-operator-controller-manager-b45d7bf98-2mr2s\" (UID: \"901f347a-3b10-4392-8247-41a859112544\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.369237 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l45jf\" (UniqueName: \"kubernetes.io/projected/e7fdd2cd-e517-46b5-acb3-22b59b7f132f-kube-api-access-l45jf\") pod \"cinder-operator-controller-manager-69cf5d4557-tlt96\" (UID: \"e7fdd2cd-e517-46b5-acb3-22b59b7f132f\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.373306 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.374784 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.376898 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f7gls" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.385658 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.391214 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.392072 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.396400 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zfvmv" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.407865 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.414761 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.415541 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.416001 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v56km\" (UniqueName: \"kubernetes.io/projected/c3e0f5c7-10cb-441c-9516-f6de8fe29757-kube-api-access-v56km\") pod \"barbican-operator-controller-manager-59dd8b7cbf-s8q8p\" (UID: \"c3e0f5c7-10cb-441c-9516-f6de8fe29757\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.421735 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lk2r2" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.422777 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7rw\" (UniqueName: \"kubernetes.io/projected/901f347a-3b10-4392-8247-41a859112544-kube-api-access-jt7rw\") pod \"designate-operator-controller-manager-b45d7bf98-2mr2s\" (UID: \"901f347a-3b10-4392-8247-41a859112544\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.424545 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l45jf\" (UniqueName: \"kubernetes.io/projected/e7fdd2cd-e517-46b5-acb3-22b59b7f132f-kube-api-access-l45jf\") pod \"cinder-operator-controller-manager-69cf5d4557-tlt96\" (UID: \"e7fdd2cd-e517-46b5-acb3-22b59b7f132f\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.431660 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.450162 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.470437 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.470520 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jrkb\" (UniqueName: \"kubernetes.io/projected/35a3fafd-45ea-465d-90ef-36148a60685e-kube-api-access-2jrkb\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.470539 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhnr2\" (UniqueName: \"kubernetes.io/projected/fa976a5e-7cd9-402f-9792-015ca1488d1f-kube-api-access-nhnr2\") pod \"glance-operator-controller-manager-78fdd796fd-skwtp\" (UID: \"fa976a5e-7cd9-402f-9792-015ca1488d1f\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.470595 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wh2x\" (UniqueName: \"kubernetes.io/projected/25848d11-6830-45f8-aff0-0082594b5f3f-kube-api-access-9wh2x\") pod \"horizon-operator-controller-manager-77d5c5b54f-zkfzz\" (UID: \"25848d11-6830-45f8-aff0-0082594b5f3f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.470614 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9wtn\" (UniqueName: \"kubernetes.io/projected/659f7d3e-5518-4d19-bb54-e39295a667d2-kube-api-access-g9wtn\") pod \"heat-operator-controller-manager-594c8c9d5d-2fkhp\" (UID: \"659f7d3e-5518-4d19-bb54-e39295a667d2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.537206 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.571874 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jrkb\" (UniqueName: \"kubernetes.io/projected/35a3fafd-45ea-465d-90ef-36148a60685e-kube-api-access-2jrkb\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.572216 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhnr2\" (UniqueName: \"kubernetes.io/projected/fa976a5e-7cd9-402f-9792-015ca1488d1f-kube-api-access-nhnr2\") pod \"glance-operator-controller-manager-78fdd796fd-skwtp\" (UID: \"fa976a5e-7cd9-402f-9792-015ca1488d1f\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.572375 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wh2x\" (UniqueName: \"kubernetes.io/projected/25848d11-6830-45f8-aff0-0082594b5f3f-kube-api-access-9wh2x\") pod \"horizon-operator-controller-manager-77d5c5b54f-zkfzz\" (UID: \"25848d11-6830-45f8-aff0-0082594b5f3f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.572527 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9wtn\" (UniqueName: \"kubernetes.io/projected/659f7d3e-5518-4d19-bb54-e39295a667d2-kube-api-access-g9wtn\") pod \"heat-operator-controller-manager-594c8c9d5d-2fkhp\" (UID: \"659f7d3e-5518-4d19-bb54-e39295a667d2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.572659 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g68t\" (UniqueName: \"kubernetes.io/projected/78689fee-3fe7-47d2-866d-6465d23378ea-kube-api-access-9g68t\") pod \"keystone-operator-controller-manager-b8b6d4659-dfb5n\" (UID: \"78689fee-3fe7-47d2-866d-6465d23378ea\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.572806 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.572987 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgvt4\" (UniqueName: \"kubernetes.io/projected/e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7-kube-api-access-dgvt4\") pod \"ironic-operator-controller-manager-69d6c9f5b8-gd568\" (UID: \"e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 16:47:24 crc kubenswrapper[4758]: E0122 16:47:24.572909 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:24 crc 
kubenswrapper[4758]: E0122 16:47:24.573199 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert podName:35a3fafd-45ea-465d-90ef-36148a60685e nodeName:}" failed. No retries permitted until 2026-01-22 16:47:25.073176256 +0000 UTC m=+1066.556515541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert") pod "infra-operator-controller-manager-54ccf4f85d-sb974" (UID: "35a3fafd-45ea-465d-90ef-36148a60685e") : secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.610160 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.681002 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g68t\" (UniqueName: \"kubernetes.io/projected/78689fee-3fe7-47d2-866d-6465d23378ea-kube-api-access-9g68t\") pod \"keystone-operator-controller-manager-b8b6d4659-dfb5n\" (UID: \"78689fee-3fe7-47d2-866d-6465d23378ea\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.681056 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgvt4\" (UniqueName: \"kubernetes.io/projected/e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7-kube-api-access-dgvt4\") pod \"ironic-operator-controller-manager-69d6c9f5b8-gd568\" (UID: \"e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.681965 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhnr2\" (UniqueName: \"kubernetes.io/projected/fa976a5e-7cd9-402f-9792-015ca1488d1f-kube-api-access-nhnr2\") pod \"glance-operator-controller-manager-78fdd796fd-skwtp\" (UID: \"fa976a5e-7cd9-402f-9792-015ca1488d1f\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.684810 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wh2x\" (UniqueName: \"kubernetes.io/projected/25848d11-6830-45f8-aff0-0082594b5f3f-kube-api-access-9wh2x\") pod \"horizon-operator-controller-manager-77d5c5b54f-zkfzz\" (UID: \"25848d11-6830-45f8-aff0-0082594b5f3f\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.698088 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.705303 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9wtn\" (UniqueName: \"kubernetes.io/projected/659f7d3e-5518-4d19-bb54-e39295a667d2-kube-api-access-g9wtn\") pod \"heat-operator-controller-manager-594c8c9d5d-2fkhp\" (UID: \"659f7d3e-5518-4d19-bb54-e39295a667d2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.710938 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.723808 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.723726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g68t\" (UniqueName: \"kubernetes.io/projected/78689fee-3fe7-47d2-866d-6465d23378ea-kube-api-access-9g68t\") pod \"keystone-operator-controller-manager-b8b6d4659-dfb5n\" (UID: \"78689fee-3fe7-47d2-866d-6465d23378ea\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.734569 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2w6mb" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.734867 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jrkb\" (UniqueName: \"kubernetes.io/projected/35a3fafd-45ea-465d-90ef-36148a60685e-kube-api-access-2jrkb\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.735433 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgvt4\" (UniqueName: \"kubernetes.io/projected/e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7-kube-api-access-dgvt4\") pod \"ironic-operator-controller-manager-69d6c9f5b8-gd568\" (UID: \"e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.780122 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.781058 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.792201 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-d798m" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.848375 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.848424 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.849453 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.849558 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.857281 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s6gv4" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.866669 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.885432 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8vsz\" (UniqueName: \"kubernetes.io/projected/d67bb459-81fe-48a2-ac8a-cb4441bb35bb-kube-api-access-w8vsz\") pod \"mariadb-operator-controller-manager-c87fff755-d2nmz\" (UID: \"d67bb459-81fe-48a2-ac8a-cb4441bb35bb\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.885522 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xw4q\" (UniqueName: \"kubernetes.io/projected/5ade5af9-f79e-4285-841c-0f08e88cca47-kube-api-access-7xw4q\") pod \"manila-operator-controller-manager-78c6999f6f-2qp8f\" (UID: \"5ade5af9-f79e-4285-841c-0f08e88cca47\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.886981 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.889040 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.892398 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gmg82" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.892585 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.893636 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.899724 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-g7xdx" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.905605 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.910813 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.920055 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.920329 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.921325 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.926100 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2fs5z" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.928690 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.939630 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.948235 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d"] Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.957803 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 16:47:24 crc kubenswrapper[4758]: I0122 16:47:24.976672 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.017774 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.022734 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8vsz\" (UniqueName: \"kubernetes.io/projected/d67bb459-81fe-48a2-ac8a-cb4441bb35bb-kube-api-access-w8vsz\") pod \"mariadb-operator-controller-manager-c87fff755-d2nmz\" (UID: \"d67bb459-81fe-48a2-ac8a-cb4441bb35bb\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.022874 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngl4w\" (UniqueName: \"kubernetes.io/projected/c73a71b4-f1fd-4a6c-9832-ce9b48a5f220-kube-api-access-ngl4w\") pod \"neutron-operator-controller-manager-5d8f59fb49-7tzm4\" (UID: \"c73a71b4-f1fd-4a6c-9832-ce9b48a5f220\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.022916 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xw4q\" (UniqueName: \"kubernetes.io/projected/5ade5af9-f79e-4285-841c-0f08e88cca47-kube-api-access-7xw4q\") pod \"manila-operator-controller-manager-78c6999f6f-2qp8f\" (UID: \"5ade5af9-f79e-4285-841c-0f08e88cca47\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.028759 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.031859 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.032499 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pz96z" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.038459 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.041644 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qcqlv" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.059978 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.064175 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xw4q\" (UniqueName: \"kubernetes.io/projected/5ade5af9-f79e-4285-841c-0f08e88cca47-kube-api-access-7xw4q\") pod \"manila-operator-controller-manager-78c6999f6f-2qp8f\" (UID: \"5ade5af9-f79e-4285-841c-0f08e88cca47\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.088600 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.095810 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8vsz\" (UniqueName: \"kubernetes.io/projected/d67bb459-81fe-48a2-ac8a-cb4441bb35bb-kube-api-access-w8vsz\") pod \"mariadb-operator-controller-manager-c87fff755-d2nmz\" (UID: \"d67bb459-81fe-48a2-ac8a-cb4441bb35bb\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.121016 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.177609 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.196890 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x965x\" (UniqueName: \"kubernetes.io/projected/f5135718-a42b-4089-922b-9fba781fe6db-kube-api-access-x965x\") pod \"ovn-operator-controller-manager-55db956ddc-lb8mx\" (UID: \"f5135718-a42b-4089-922b-9fba781fe6db\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.197003 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sns2g\" (UniqueName: \"kubernetes.io/projected/7d2439ad-1ca6-4c24-9d15-e04f0f89aedf-kube-api-access-sns2g\") pod \"nova-operator-controller-manager-6b8bc8d87d-zfcl5\" (UID: \"7d2439ad-1ca6-4c24-9d15-e04f0f89aedf\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.197058 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngl4w\" (UniqueName: \"kubernetes.io/projected/c73a71b4-f1fd-4a6c-9832-ce9b48a5f220-kube-api-access-ngl4w\") pod \"neutron-operator-controller-manager-5d8f59fb49-7tzm4\" (UID: \"c73a71b4-f1fd-4a6c-9832-ce9b48a5f220\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.197126 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rstd\" (UniqueName: \"kubernetes.io/projected/cdd1962b-fbf0-480c-b5e2-e28ee6988046-kube-api-access-4rstd\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.197157 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9dpc\" (UniqueName: \"kubernetes.io/projected/16d19f40-45e9-4f1a-b953-e5c68ca014f3-kube-api-access-z9dpc\") pod \"octavia-operator-controller-manager-7bd9774b6-jr994\" (UID: \"16d19f40-45e9-4f1a-b953-e5c68ca014f3\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.197284 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.197332 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcwxn\" (UniqueName: \"kubernetes.io/projected/19b4b900-d90f-4e59-b082-61f058f5882b-kube-api-access-wcwxn\") pod \"placement-operator-controller-manager-5d646b7d76-4jthc\" (UID: \"19b4b900-d90f-4e59-b082-61f058f5882b\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.197359 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.198000 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.198069 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert podName:35a3fafd-45ea-465d-90ef-36148a60685e nodeName:}" failed. No retries permitted until 2026-01-22 16:47:26.198040936 +0000 UTC m=+1067.681380221 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert") pod "infra-operator-controller-manager-54ccf4f85d-sb974" (UID: "35a3fafd-45ea-465d-90ef-36148a60685e") : secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.248595 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.249585 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.256803 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nzrzh" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.256983 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.257969 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.273800 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dbtnp" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.281437 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.284946 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngl4w\" (UniqueName: \"kubernetes.io/projected/c73a71b4-f1fd-4a6c-9832-ce9b48a5f220-kube-api-access-ngl4w\") pod \"neutron-operator-controller-manager-5d8f59fb49-7tzm4\" (UID: \"c73a71b4-f1fd-4a6c-9832-ce9b48a5f220\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.288762 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.290233 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.298217 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.299075 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcwxn\" (UniqueName: \"kubernetes.io/projected/19b4b900-d90f-4e59-b082-61f058f5882b-kube-api-access-wcwxn\") pod \"placement-operator-controller-manager-5d646b7d76-4jthc\" (UID: \"19b4b900-d90f-4e59-b082-61f058f5882b\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.299103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.299154 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x965x\" (UniqueName: \"kubernetes.io/projected/f5135718-a42b-4089-922b-9fba781fe6db-kube-api-access-x965x\") pod \"ovn-operator-controller-manager-55db956ddc-lb8mx\" (UID: \"f5135718-a42b-4089-922b-9fba781fe6db\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.299192 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sns2g\" (UniqueName: \"kubernetes.io/projected/7d2439ad-1ca6-4c24-9d15-e04f0f89aedf-kube-api-access-sns2g\") pod \"nova-operator-controller-manager-6b8bc8d87d-zfcl5\" (UID: \"7d2439ad-1ca6-4c24-9d15-e04f0f89aedf\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.299241 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4rstd\" (UniqueName: \"kubernetes.io/projected/cdd1962b-fbf0-480c-b5e2-e28ee6988046-kube-api-access-4rstd\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.299263 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9dpc\" (UniqueName: \"kubernetes.io/projected/16d19f40-45e9-4f1a-b953-e5c68ca014f3-kube-api-access-z9dpc\") pod \"octavia-operator-controller-manager-7bd9774b6-jr994\" (UID: \"16d19f40-45e9-4f1a-b953-e5c68ca014f3\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.299429 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-s6bn2" Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.307320 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.307393 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert podName:cdd1962b-fbf0-480c-b5e2-e28ee6988046 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:25.807362386 +0000 UTC m=+1067.290701671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" (UID: "cdd1962b-fbf0-480c-b5e2-e28ee6988046") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.354457 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.358844 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcwxn\" (UniqueName: \"kubernetes.io/projected/19b4b900-d90f-4e59-b082-61f058f5882b-kube-api-access-wcwxn\") pod \"placement-operator-controller-manager-5d646b7d76-4jthc\" (UID: \"19b4b900-d90f-4e59-b082-61f058f5882b\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.359249 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rstd\" (UniqueName: \"kubernetes.io/projected/cdd1962b-fbf0-480c-b5e2-e28ee6988046-kube-api-access-4rstd\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.359302 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sns2g\" (UniqueName: \"kubernetes.io/projected/7d2439ad-1ca6-4c24-9d15-e04f0f89aedf-kube-api-access-sns2g\") pod \"nova-operator-controller-manager-6b8bc8d87d-zfcl5\" (UID: \"7d2439ad-1ca6-4c24-9d15-e04f0f89aedf\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 
16:47:25.366521 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x965x\" (UniqueName: \"kubernetes.io/projected/f5135718-a42b-4089-922b-9fba781fe6db-kube-api-access-x965x\") pod \"ovn-operator-controller-manager-55db956ddc-lb8mx\" (UID: \"f5135718-a42b-4089-922b-9fba781fe6db\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.366605 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.367840 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.372363 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9dpc\" (UniqueName: \"kubernetes.io/projected/16d19f40-45e9-4f1a-b953-e5c68ca014f3-kube-api-access-z9dpc\") pod \"octavia-operator-controller-manager-7bd9774b6-jr994\" (UID: \"16d19f40-45e9-4f1a-b953-e5c68ca014f3\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.372475 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nwvvt" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.387030 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.397501 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.397576 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.411058 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8jv\" (UniqueName: \"kubernetes.io/projected/d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13-kube-api-access-5w8jv\") pod \"telemetry-operator-controller-manager-85cd9769bb-59n7w\" (UID: \"d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.411130 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57r78\" (UniqueName: \"kubernetes.io/projected/71c16ac1-3276-4096-93c5-d10765320713-kube-api-access-57r78\") pod \"watcher-operator-controller-manager-85b8fd6746-9vvd6\" (UID: \"71c16ac1-3276-4096-93c5-d10765320713\") " pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.411269 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb6x6\" (UniqueName: \"kubernetes.io/projected/40845474-36a2-48c0-a0df-af5deb2a31fd-kube-api-access-kb6x6\") pod \"swift-operator-controller-manager-547cbdb99f-4rlkk\" (UID: \"40845474-36a2-48c0-a0df-af5deb2a31fd\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.420815 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.422040 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.424664 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.425469 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4q6rk" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.427112 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.427326 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.456067 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.457075 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.459608 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8"] Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.461867 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.493436 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.513371 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w8jv\" (UniqueName: \"kubernetes.io/projected/d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13-kube-api-access-5w8jv\") pod \"telemetry-operator-controller-manager-85cd9769bb-59n7w\" (UID: \"d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.513433 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57r78\" (UniqueName: \"kubernetes.io/projected/71c16ac1-3276-4096-93c5-d10765320713-kube-api-access-57r78\") pod \"watcher-operator-controller-manager-85b8fd6746-9vvd6\" (UID: \"71c16ac1-3276-4096-93c5-d10765320713\") " pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.513530 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xzk8\" (UniqueName: \"kubernetes.io/projected/644142ed-c937-406d-9fd5-3fe856879a92-kube-api-access-9xzk8\") pod \"test-operator-controller-manager-69797bbcbd-2xj52\" (UID: \"644142ed-c937-406d-9fd5-3fe856879a92\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.513574 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb6x6\" (UniqueName: \"kubernetes.io/projected/40845474-36a2-48c0-a0df-af5deb2a31fd-kube-api-access-kb6x6\") pod \"swift-operator-controller-manager-547cbdb99f-4rlkk\" (UID: \"40845474-36a2-48c0-a0df-af5deb2a31fd\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.529086 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.535873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb6x6\" (UniqueName: \"kubernetes.io/projected/40845474-36a2-48c0-a0df-af5deb2a31fd-kube-api-access-kb6x6\") pod \"swift-operator-controller-manager-547cbdb99f-4rlkk\" (UID: \"40845474-36a2-48c0-a0df-af5deb2a31fd\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.536634 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57r78\" (UniqueName: \"kubernetes.io/projected/71c16ac1-3276-4096-93c5-d10765320713-kube-api-access-57r78\") pod \"watcher-operator-controller-manager-85b8fd6746-9vvd6\" (UID: \"71c16ac1-3276-4096-93c5-d10765320713\") " pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.543820 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.611287 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w8jv\" (UniqueName: \"kubernetes.io/projected/d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13-kube-api-access-5w8jv\") pod \"telemetry-operator-controller-manager-85cd9769bb-59n7w\" (UID: \"d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.619480 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrc2c\" (UniqueName: \"kubernetes.io/projected/c4847ca7-5057-4d6d-80c5-f74c7d633e83-kube-api-access-qrc2c\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.619768 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qn4\" (UniqueName: \"kubernetes.io/projected/26d5529a-b270-40fc-9faa-037435dd2f80-kube-api-access-t7qn4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cb5t8\" (UID: \"26d5529a-b270-40fc-9faa-037435dd2f80\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.619900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.620020 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xzk8\" (UniqueName: \"kubernetes.io/projected/644142ed-c937-406d-9fd5-3fe856879a92-kube-api-access-9xzk8\") pod \"test-operator-controller-manager-69797bbcbd-2xj52\" (UID: \"644142ed-c937-406d-9fd5-3fe856879a92\") " 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.620153 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.651261 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xzk8\" (UniqueName: \"kubernetes.io/projected/644142ed-c937-406d-9fd5-3fe856879a92-kube-api-access-9xzk8\") pod \"test-operator-controller-manager-69797bbcbd-2xj52\" (UID: \"644142ed-c937-406d-9fd5-3fe856879a92\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.721765 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qn4\" (UniqueName: \"kubernetes.io/projected/26d5529a-b270-40fc-9faa-037435dd2f80-kube-api-access-t7qn4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cb5t8\" (UID: \"26d5529a-b270-40fc-9faa-037435dd2f80\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.721802 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.721847 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.721889 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrc2c\" (UniqueName: \"kubernetes.io/projected/c4847ca7-5057-4d6d-80c5-f74c7d633e83-kube-api-access-qrc2c\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.722414 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.722453 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:26.222440464 +0000 UTC m=+1067.705779749 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "webhook-server-cert" not found Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.722585 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.722610 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:26.222601629 +0000 UTC m=+1067.705940914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "metrics-server-cert" not found Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.724690 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.738344 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.757180 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrc2c\" (UniqueName: \"kubernetes.io/projected/c4847ca7-5057-4d6d-80c5-f74c7d633e83-kube-api-access-qrc2c\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.788491 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7qn4\" (UniqueName: \"kubernetes.io/projected/26d5529a-b270-40fc-9faa-037435dd2f80-kube-api-access-t7qn4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cb5t8\" (UID: \"26d5529a-b270-40fc-9faa-037435dd2f80\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" Jan 22 16:47:25 crc kubenswrapper[4758]: I0122 16:47:25.824810 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.825048 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:25 crc kubenswrapper[4758]: E0122 16:47:25.825111 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert podName:cdd1962b-fbf0-480c-b5e2-e28ee6988046 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:26.825094185 +0000 UTC m=+1068.308433470 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" (UID: "cdd1962b-fbf0-480c-b5e2-e28ee6988046") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:26 crc kubenswrapper[4758]: I0122 16:47:26.038619 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 16:47:26 crc kubenswrapper[4758]: I0122 16:47:26.052692 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 16:47:26 crc kubenswrapper[4758]: I0122 16:47:26.096942 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" Jan 22 16:47:26 crc kubenswrapper[4758]: I0122 16:47:26.332950 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:26 crc kubenswrapper[4758]: I0122 16:47:26.333032 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:26 crc kubenswrapper[4758]: I0122 16:47:26.333108 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:26 crc kubenswrapper[4758]: E0122 16:47:26.333209 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:26 crc kubenswrapper[4758]: E0122 16:47:26.333296 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert podName:35a3fafd-45ea-465d-90ef-36148a60685e nodeName:}" failed. No retries permitted until 2026-01-22 16:47:28.333273874 +0000 UTC m=+1069.816613159 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert") pod "infra-operator-controller-manager-54ccf4f85d-sb974" (UID: "35a3fafd-45ea-465d-90ef-36148a60685e") : secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:26 crc kubenswrapper[4758]: E0122 16:47:26.333305 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 16:47:26 crc kubenswrapper[4758]: E0122 16:47:26.333364 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:27.333345616 +0000 UTC m=+1068.816684951 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "metrics-server-cert" not found Jan 22 16:47:26 crc kubenswrapper[4758]: E0122 16:47:26.333955 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 16:47:26 crc kubenswrapper[4758]: E0122 16:47:26.333991 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:27.333979982 +0000 UTC m=+1068.817319337 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "webhook-server-cert" not found Jan 22 16:47:26 crc kubenswrapper[4758]: I0122 16:47:26.859408 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:26 crc kubenswrapper[4758]: E0122 16:47:26.859640 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:26 crc kubenswrapper[4758]: E0122 16:47:26.859905 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert podName:cdd1962b-fbf0-480c-b5e2-e28ee6988046 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:28.859878363 +0000 UTC m=+1070.343217638 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" (UID: "cdd1962b-fbf0-480c-b5e2-e28ee6988046") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.203991 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.319666 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.329487 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.374900 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.377474 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.377571 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.377642 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.377691 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.377725 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:29.377705074 +0000 UTC m=+1070.861044359 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "webhook-server-cert" not found Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.377819 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:29.377734025 +0000 UTC m=+1070.861073310 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "metrics-server-cert" not found Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.404275 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" event={"ID":"e7fdd2cd-e517-46b5-acb3-22b59b7f132f","Type":"ContainerStarted","Data":"5e21f231dc5d83979eee5bf82b0d36a536be9121e1daf14143acc885a7658153"} Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.411342 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" event={"ID":"901f347a-3b10-4392-8247-41a859112544","Type":"ContainerStarted","Data":"2ebf99c830d13c22fb7d7830185d51870cc4c8aac95d7bb04bdb6d70a662c033"} Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.442521 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.453800 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.467533 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.523870 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.571105 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.592055 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.597797 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.623703 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994"] Jan 22 16:47:27 crc kubenswrapper[4758]: W0122 16:47:27.634198 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd67bb459_81fe_48a2_ac8a_cb4441bb35bb.slice/crio-508555c33bf802fd57bc65057573ef0b576542e9cacebff395d7bdfef03eda6f WatchSource:0}: Error finding container 508555c33bf802fd57bc65057573ef0b576542e9cacebff395d7bdfef03eda6f: Status 404 returned error can't find the container with id 508555c33bf802fd57bc65057573ef0b576542e9cacebff395d7bdfef03eda6f Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.670690 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z9dpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-jr994_openstack-operators(16d19f40-45e9-4f1a-b953-e5c68ca014f3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.673114 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" Jan 22 16:47:27 crc kubenswrapper[4758]: W0122 16:47:27.752396 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5135718_a42b_4089_922b_9fba781fe6db.slice/crio-882f9e410c0b5edef3c631cf8987ee764d8a07a50fa0f5d8a0a55ce67dcbdfb6 WatchSource:0}: Error finding container 882f9e410c0b5edef3c631cf8987ee764d8a07a50fa0f5d8a0a55ce67dcbdfb6: Status 404 returned error can't find the container with id 882f9e410c0b5edef3c631cf8987ee764d8a07a50fa0f5d8a0a55ce67dcbdfb6 Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.756159 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx"] Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.818652 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f"] Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.820643 4758 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xw4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-2qp8f_openstack-operators(5ade5af9-f79e-4285-841c-0f08e88cca47): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.822545 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" podUID="5ade5af9-f79e-4285-841c-0f08e88cca47" Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.825817 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk"] Jan 22 16:47:27 crc kubenswrapper[4758]: W0122 16:47:27.831576 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40845474_36a2_48c0_a0df_af5deb2a31fd.slice/crio-3ace1ad59d7ec1a9b80b0a3e14367c1c697c42c0e7224aa4e0427f541d7c041b WatchSource:0}: Error finding container 3ace1ad59d7ec1a9b80b0a3e14367c1c697c42c0e7224aa4e0427f541d7c041b: Status 404 returned error can't find the container with id 3ace1ad59d7ec1a9b80b0a3e14367c1c697c42c0e7224aa4e0427f541d7c041b Jan 22 16:47:27 crc 
kubenswrapper[4758]: E0122 16:47:27.834433 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kb6x6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-4rlkk_openstack-operators(40845474-36a2-48c0-a0df-af5deb2a31fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.836307 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" podUID="40845474-36a2-48c0-a0df-af5deb2a31fd" Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.857644 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5"] Jan 22 16:47:27 crc kubenswrapper[4758]: W0122 16:47:27.858539 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d2439ad_1ca6_4c24_9d15_e04f0f89aedf.slice/crio-f6e7be447745d6039d6a5905884b51d6cfcd080d733c3f92e28672d9f3d5e934 WatchSource:0}: Error finding container f6e7be447745d6039d6a5905884b51d6cfcd080d733c3f92e28672d9f3d5e934: Status 404 returned error can't find the container with id 
f6e7be447745d6039d6a5905884b51d6cfcd080d733c3f92e28672d9f3d5e934 Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.861958 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sns2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-zfcl5_openstack-operators(7d2439ad-1ca6-4c24-9d15-e04f0f89aedf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.863421 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" podUID="7d2439ad-1ca6-4c24-9d15-e04f0f89aedf" Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.865952 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w"] Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.873091 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5w8jv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-59n7w_openstack-operators(d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.874196 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" podUID="d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13" Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.925014 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8"] Jan 22 16:47:27 crc kubenswrapper[4758]: W0122 16:47:27.938178 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26d5529a_b270_40fc_9faa_037435dd2f80.slice/crio-9fc69981db44b61650c8ff7e89e19bdb410f0fa7951dc0cf1a31ec04bd191eac WatchSource:0}: Error finding container 9fc69981db44b61650c8ff7e89e19bdb410f0fa7951dc0cf1a31ec04bd191eac: Status 404 returned error can't find the container with id 9fc69981db44b61650c8ff7e89e19bdb410f0fa7951dc0cf1a31ec04bd191eac Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.956191 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52"] Jan 22 16:47:27 crc kubenswrapper[4758]: W0122 16:47:27.957986 4758 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod644142ed_c937_406d_9fd5_3fe856879a92.slice/crio-e961565aa672cc8acbd944214079c9f5a07c05c4ca215200625d773163ee2279 WatchSource:0}: Error finding container e961565aa672cc8acbd944214079c9f5a07c05c4ca215200625d773163ee2279: Status 404 returned error can't find the container with id e961565aa672cc8acbd944214079c9f5a07c05c4ca215200625d773163ee2279 Jan 22 16:47:27 crc kubenswrapper[4758]: W0122 16:47:27.959577 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71c16ac1_3276_4096_93c5_d10765320713.slice/crio-3cfcbb156ad565f9c755d2890929e301b9f250f496b97f25dc83b50dc4caf485 WatchSource:0}: Error finding container 3cfcbb156ad565f9c755d2890929e301b9f250f496b97f25dc83b50dc4caf485: Status 404 returned error can't find the container with id 3cfcbb156ad565f9c755d2890929e301b9f250f496b97f25dc83b50dc4caf485 Jan 22 16:47:27 crc kubenswrapper[4758]: I0122 16:47:27.961338 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6"] Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.962204 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xzk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod test-operator-controller-manager-69797bbcbd-2xj52_openstack-operators(644142ed-c937-406d-9fd5-3fe856879a92): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.962648 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.196:5001/openstack-k8s-operators/watcher-operator:66a2a7ca52c97ab09e74ddf1b8f1663bf04650c3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-57r78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-85b8fd6746-9vvd6_openstack-operators(71c16ac1-3276-4096-93c5-d10765320713): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.963621 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" Jan 22 16:47:27 crc kubenswrapper[4758]: E0122 16:47:27.964106 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.403868 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.404102 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.404397 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert podName:35a3fafd-45ea-465d-90ef-36148a60685e nodeName:}" failed. No retries permitted until 2026-01-22 16:47:32.404377922 +0000 UTC m=+1073.887717207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert") pod "infra-operator-controller-manager-54ccf4f85d-sb974" (UID: "35a3fafd-45ea-465d-90ef-36148a60685e") : secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.422472 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" event={"ID":"e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7","Type":"ContainerStarted","Data":"99a97cfcd595cf9169bda53f5def4062001cfe6c71ccc7ad7695d9edb2ebcfa8"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.423719 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" event={"ID":"d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13","Type":"ContainerStarted","Data":"45aa956537c9bff949c23ebe497d1aa791daf7601b2da3ec6197587a71edceb1"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.425372 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" event={"ID":"c73a71b4-f1fd-4a6c-9832-ce9b48a5f220","Type":"ContainerStarted","Data":"05f3ecf79841ae03c9c1d48636c22d566b89f8d29cbe3ad04649553c01120da7"} Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.428191 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" podUID="d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13" Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.436702 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" event={"ID":"19b4b900-d90f-4e59-b082-61f058f5882b","Type":"ContainerStarted","Data":"d3c94d061d76388b290748e6d4aa6062d0781a8993339499e2822ee95c5f754a"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.441711 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" event={"ID":"26d5529a-b270-40fc-9faa-037435dd2f80","Type":"ContainerStarted","Data":"9fc69981db44b61650c8ff7e89e19bdb410f0fa7951dc0cf1a31ec04bd191eac"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.458167 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" 
event={"ID":"f5135718-a42b-4089-922b-9fba781fe6db","Type":"ContainerStarted","Data":"882f9e410c0b5edef3c631cf8987ee764d8a07a50fa0f5d8a0a55ce67dcbdfb6"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.462571 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" event={"ID":"644142ed-c937-406d-9fd5-3fe856879a92","Type":"ContainerStarted","Data":"e961565aa672cc8acbd944214079c9f5a07c05c4ca215200625d773163ee2279"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.464764 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" event={"ID":"40845474-36a2-48c0-a0df-af5deb2a31fd","Type":"ContainerStarted","Data":"3ace1ad59d7ec1a9b80b0a3e14367c1c697c42c0e7224aa4e0427f541d7c041b"} Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.467221 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.467973 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" podUID="40845474-36a2-48c0-a0df-af5deb2a31fd" Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.468901 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" event={"ID":"25848d11-6830-45f8-aff0-0082594b5f3f","Type":"ContainerStarted","Data":"a0847c013f619c57d1490e51c52f34ae016da326599881f5ccb721b2557cb443"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.470559 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" event={"ID":"d67bb459-81fe-48a2-ac8a-cb4441bb35bb","Type":"ContainerStarted","Data":"508555c33bf802fd57bc65057573ef0b576542e9cacebff395d7bdfef03eda6f"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.472104 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" event={"ID":"7d2439ad-1ca6-4c24-9d15-e04f0f89aedf","Type":"ContainerStarted","Data":"f6e7be447745d6039d6a5905884b51d6cfcd080d733c3f92e28672d9f3d5e934"} Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.486560 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" podUID="7d2439ad-1ca6-4c24-9d15-e04f0f89aedf" Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.527426 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" 
event={"ID":"16d19f40-45e9-4f1a-b953-e5c68ca014f3","Type":"ContainerStarted","Data":"d18cd6a3c2a0ed3147a7d313f52d09b83517ebd17ef36be796d84224fbdff09c"} Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.534211 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.536668 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" event={"ID":"71c16ac1-3276-4096-93c5-d10765320713","Type":"ContainerStarted","Data":"3cfcbb156ad565f9c755d2890929e301b9f250f496b97f25dc83b50dc4caf485"} Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.537917 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/openstack-k8s-operators/watcher-operator:66a2a7ca52c97ab09e74ddf1b8f1663bf04650c3\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.541870 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" event={"ID":"659f7d3e-5518-4d19-bb54-e39295a667d2","Type":"ContainerStarted","Data":"262638dca36011cd61a298087bb59000efd1391b3da50f0185986af41816a5b0"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.547709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" event={"ID":"5ade5af9-f79e-4285-841c-0f08e88cca47","Type":"ContainerStarted","Data":"b3beb89ebd59fdc1b81ba9c33976e9aac346713e4cad01c5955efbc9f09ff9b3"} Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.550997 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" podUID="5ade5af9-f79e-4285-841c-0f08e88cca47" Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.551495 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" event={"ID":"78689fee-3fe7-47d2-866d-6465d23378ea","Type":"ContainerStarted","Data":"17f15e511a07bb8b680bec2688fff9aa65c23de120bd6d661cab7c1bc215f0cc"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.561147 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" event={"ID":"fa976a5e-7cd9-402f-9792-015ca1488d1f","Type":"ContainerStarted","Data":"e57ac543091dd74b94fcaa7dd71c904e7c7714fc29e2e098c7792f075cd1ae11"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.562728 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" 
event={"ID":"c3e0f5c7-10cb-441c-9516-f6de8fe29757","Type":"ContainerStarted","Data":"f4749e4dd3a8dfd3ea8e121934072f201d02acd93264249114938150c461cf38"} Jan 22 16:47:28 crc kubenswrapper[4758]: I0122 16:47:28.911571 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.913051 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:28 crc kubenswrapper[4758]: E0122 16:47:28.913108 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert podName:cdd1962b-fbf0-480c-b5e2-e28ee6988046 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:32.913090975 +0000 UTC m=+1074.396430260 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" (UID: "cdd1962b-fbf0-480c-b5e2-e28ee6988046") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:29 crc kubenswrapper[4758]: I0122 16:47:29.480141 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:29 crc kubenswrapper[4758]: I0122 16:47:29.480238 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.480330 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.480403 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:33.480384779 +0000 UTC m=+1074.963724064 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "webhook-server-cert" not found Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.485460 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.485539 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:33.485520819 +0000 UTC m=+1074.968860114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "metrics-server-cert" not found Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.667125 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" podUID="d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13" Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.667588 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" podUID="7d2439ad-1ca6-4c24-9d15-e04f0f89aedf" Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.667592 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" podUID="40845474-36a2-48c0-a0df-af5deb2a31fd" Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.667756 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" podUID="5ade5af9-f79e-4285-841c-0f08e88cca47" Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.667874 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 
16:47:29.667982 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/openstack-k8s-operators/watcher-operator:66a2a7ca52c97ab09e74ddf1b8f1663bf04650c3\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" Jan 22 16:47:29 crc kubenswrapper[4758]: E0122 16:47:29.668126 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" Jan 22 16:47:32 crc kubenswrapper[4758]: I0122 16:47:32.461790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:32 crc kubenswrapper[4758]: E0122 16:47:32.462038 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:32 crc kubenswrapper[4758]: E0122 16:47:32.462403 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert podName:35a3fafd-45ea-465d-90ef-36148a60685e nodeName:}" failed. No retries permitted until 2026-01-22 16:47:40.462373839 +0000 UTC m=+1081.945713124 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert") pod "infra-operator-controller-manager-54ccf4f85d-sb974" (UID: "35a3fafd-45ea-465d-90ef-36148a60685e") : secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:32 crc kubenswrapper[4758]: I0122 16:47:32.973117 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:32 crc kubenswrapper[4758]: E0122 16:47:32.973999 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:32 crc kubenswrapper[4758]: E0122 16:47:32.974046 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert podName:cdd1962b-fbf0-480c-b5e2-e28ee6988046 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:40.974032303 +0000 UTC m=+1082.457371588 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" (UID: "cdd1962b-fbf0-480c-b5e2-e28ee6988046") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:47:33 crc kubenswrapper[4758]: I0122 16:47:33.480625 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:33 crc kubenswrapper[4758]: E0122 16:47:33.480783 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 16:47:33 crc kubenswrapper[4758]: E0122 16:47:33.480847 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:41.480828843 +0000 UTC m=+1082.964168128 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "webhook-server-cert" not found Jan 22 16:47:33 crc kubenswrapper[4758]: I0122 16:47:33.581945 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:33 crc kubenswrapper[4758]: E0122 16:47:33.582092 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 16:47:33 crc kubenswrapper[4758]: E0122 16:47:33.582139 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:41.582125037 +0000 UTC m=+1083.065464312 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "metrics-server-cert" not found Jan 22 16:47:40 crc kubenswrapper[4758]: I0122 16:47:40.519849 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:40 crc kubenswrapper[4758]: E0122 16:47:40.520080 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:40 crc kubenswrapper[4758]: E0122 16:47:40.520868 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert podName:35a3fafd-45ea-465d-90ef-36148a60685e nodeName:}" failed. No retries permitted until 2026-01-22 16:47:56.520834752 +0000 UTC m=+1098.004174247 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert") pod "infra-operator-controller-manager-54ccf4f85d-sb974" (UID: "35a3fafd-45ea-465d-90ef-36148a60685e") : secret "infra-operator-webhook-server-cert" not found Jan 22 16:47:41 crc kubenswrapper[4758]: I0122 16:47:41.026277 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:41 crc kubenswrapper[4758]: I0122 16:47:41.037838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdd1962b-fbf0-480c-b5e2-e28ee6988046-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d\" (UID: \"cdd1962b-fbf0-480c-b5e2-e28ee6988046\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:41 crc kubenswrapper[4758]: I0122 16:47:41.090830 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pz96z" Jan 22 16:47:41 crc kubenswrapper[4758]: I0122 16:47:41.098064 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:47:41 crc kubenswrapper[4758]: I0122 16:47:41.534178 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:41 crc kubenswrapper[4758]: E0122 16:47:41.534364 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 16:47:41 crc kubenswrapper[4758]: E0122 16:47:41.534453 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs podName:c4847ca7-5057-4d6d-80c5-f74c7d633e83 nodeName:}" failed. No retries permitted until 2026-01-22 16:47:57.534432364 +0000 UTC m=+1099.017771649 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs") pod "openstack-operator-controller-manager-675f79667-vjvtj" (UID: "c4847ca7-5057-4d6d-80c5-f74c7d633e83") : secret "webhook-server-cert" not found Jan 22 16:47:41 crc kubenswrapper[4758]: E0122 16:47:41.601933 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4" Jan 22 16:47:41 crc kubenswrapper[4758]: E0122 16:47:41.602121 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ngl4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5d8f59fb49-7tzm4_openstack-operators(c73a71b4-f1fd-4a6c-9832-ce9b48a5f220): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:47:41 crc kubenswrapper[4758]: E0122 16:47:41.603429 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" podUID="c73a71b4-f1fd-4a6c-9832-ce9b48a5f220" Jan 22 16:47:41 crc kubenswrapper[4758]: I0122 16:47:41.635931 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:41 crc kubenswrapper[4758]: I0122 16:47:41.643110 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:41 crc kubenswrapper[4758]: E0122 16:47:41.765619 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" podUID="c73a71b4-f1fd-4a6c-9832-ce9b48a5f220" Jan 22 16:47:42 crc kubenswrapper[4758]: E0122 16:47:42.423657 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 22 16:47:42 crc kubenswrapper[4758]: E0122 16:47:42.424194 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x965x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-lb8mx_openstack-operators(f5135718-a42b-4089-922b-9fba781fe6db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:47:42 crc kubenswrapper[4758]: E0122 16:47:42.425824 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" podUID="f5135718-a42b-4089-922b-9fba781fe6db" Jan 22 16:47:42 crc kubenswrapper[4758]: E0122 16:47:42.779109 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" podUID="f5135718-a42b-4089-922b-9fba781fe6db" Jan 22 16:47:55 crc kubenswrapper[4758]: E0122 16:47:55.928322 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 22 16:47:55 crc kubenswrapper[4758]: E0122 16:47:55.929089 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9g68t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-dfb5n_openstack-operators(78689fee-3fe7-47d2-866d-6465d23378ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:47:55 crc kubenswrapper[4758]: E0122 16:47:55.930368 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" Jan 22 16:47:56 crc kubenswrapper[4758]: I0122 16:47:56.548305 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:56 crc kubenswrapper[4758]: I0122 16:47:56.573002 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/35a3fafd-45ea-465d-90ef-36148a60685e-cert\") pod 
\"infra-operator-controller-manager-54ccf4f85d-sb974\" (UID: \"35a3fafd-45ea-465d-90ef-36148a60685e\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:56 crc kubenswrapper[4758]: I0122 16:47:56.748888 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f7gls" Jan 22 16:47:56 crc kubenswrapper[4758]: I0122 16:47:56.757326 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:47:56 crc kubenswrapper[4758]: E0122 16:47:56.881582 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" Jan 22 16:47:57 crc kubenswrapper[4758]: I0122 16:47:57.562578 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:57 crc kubenswrapper[4758]: I0122 16:47:57.575641 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c4847ca7-5057-4d6d-80c5-f74c7d633e83-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-vjvtj\" (UID: \"c4847ca7-5057-4d6d-80c5-f74c7d633e83\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:57 crc kubenswrapper[4758]: I0122 16:47:57.586571 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4q6rk" Jan 22 16:47:57 crc kubenswrapper[4758]: I0122 16:47:57.594927 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:47:58 crc kubenswrapper[4758]: E0122 16:47:58.483131 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 22 16:47:58 crc kubenswrapper[4758]: E0122 16:47:58.483667 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8vsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-d2nmz_openstack-operators(d67bb459-81fe-48a2-ac8a-cb4441bb35bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:47:58 crc kubenswrapper[4758]: E0122 16:47:58.484891 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" podUID="d67bb459-81fe-48a2-ac8a-cb4441bb35bb" Jan 22 16:47:58 crc kubenswrapper[4758]: E0122 16:47:58.895831 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" podUID="d67bb459-81fe-48a2-ac8a-cb4441bb35bb" Jan 22 16:47:59 crc kubenswrapper[4758]: E0122 16:47:59.631697 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 22 16:47:59 crc kubenswrapper[4758]: E0122 16:47:59.631906 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcwxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-4jthc_openstack-operators(19b4b900-d90f-4e59-b082-61f058f5882b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:47:59 crc kubenswrapper[4758]: E0122 16:47:59.633283 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" Jan 22 16:47:59 crc kubenswrapper[4758]: E0122 16:47:59.902960 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" Jan 22 16:48:00 crc kubenswrapper[4758]: E0122 16:48:00.157711 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 22 16:48:00 crc kubenswrapper[4758]: E0122 16:48:00.157940 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7qn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-cb5t8_openstack-operators(26d5529a-b270-40fc-9faa-037435dd2f80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:48:00 crc kubenswrapper[4758]: E0122 16:48:00.159112 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" podUID="26d5529a-b270-40fc-9faa-037435dd2f80" Jan 22 
16:48:00 crc kubenswrapper[4758]: E0122 16:48:00.621035 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 22 16:48:00 crc kubenswrapper[4758]: E0122 16:48:00.621662 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xzk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-2xj52_openstack-operators(644142ed-c937-406d-9fd5-3fe856879a92): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:48:00 crc kubenswrapper[4758]: E0122 16:48:00.624014 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" Jan 22 16:48:00 crc kubenswrapper[4758]: E0122 16:48:00.910023 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" podUID="26d5529a-b270-40fc-9faa-037435dd2f80" Jan 22 16:48:01 crc kubenswrapper[4758]: E0122 16:48:01.078412 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 22 16:48:01 crc kubenswrapper[4758]: E0122 16:48:01.078617 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kb6x6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-4rlkk_openstack-operators(40845474-36a2-48c0-a0df-af5deb2a31fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:48:01 crc kubenswrapper[4758]: E0122 16:48:01.079873 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" 
podUID="40845474-36a2-48c0-a0df-af5deb2a31fd" Jan 22 16:48:01 crc kubenswrapper[4758]: E0122 16:48:01.525880 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 22 16:48:01 crc kubenswrapper[4758]: E0122 16:48:01.526057 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xw4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-2qp8f_openstack-operators(5ade5af9-f79e-4285-841c-0f08e88cca47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:48:01 crc kubenswrapper[4758]: E0122 16:48:01.527231 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" podUID="5ade5af9-f79e-4285-841c-0f08e88cca47" Jan 22 16:48:04 crc kubenswrapper[4758]: I0122 16:48:04.332962 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d"] 
Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:04.968042 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" event={"ID":"d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13","Type":"ContainerStarted","Data":"cf45c93385b847cb95046805b7d0579501a8fde4e96aec554951f00da0293ebc"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.037150 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.054662 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" event={"ID":"25848d11-6830-45f8-aff0-0082594b5f3f","Type":"ContainerStarted","Data":"76577c5221b29a65d8db3dbdf6da6b58ef6868ad173ac9ff49414491ca910328"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.055376 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.060009 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" event={"ID":"659f7d3e-5518-4d19-bb54-e39295a667d2","Type":"ContainerStarted","Data":"5e4cfe8dee549f90ddd7da44b917a696b4ad8b9811a62376b4463b33d409636a"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.060322 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.066898 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" event={"ID":"16d19f40-45e9-4f1a-b953-e5c68ca014f3","Type":"ContainerStarted","Data":"1d61b57ea732060a674fca3da40faafd12a801a2feede3f87bc0a9c8194f85bb"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.067620 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.076164 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974"] Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.079163 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj"] Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.088705 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" podStartSLOduration=4.80035511 podStartE2EDuration="41.088680312s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.872960691 +0000 UTC m=+1069.356299976" lastFinishedPulling="2026-01-22 16:48:04.161285853 +0000 UTC m=+1105.644625178" observedRunningTime="2026-01-22 16:48:05.084409606 +0000 UTC m=+1106.567748891" watchObservedRunningTime="2026-01-22 16:48:05.088680312 +0000 UTC m=+1106.572019607" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.093012 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" 
event={"ID":"901f347a-3b10-4392-8247-41a859112544","Type":"ContainerStarted","Data":"f795c930a8e12fda9c2045dccf29f2f5cfba9ae856a5150b6b7f51bce50b4ae6"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.093195 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.105518 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" event={"ID":"c3e0f5c7-10cb-441c-9516-f6de8fe29757","Type":"ContainerStarted","Data":"1489735902e42f8c37aa85aefd23353e063ce3e0f78177639e0df6c46ddeb829"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.106010 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.106291 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podStartSLOduration=7.107755611 podStartE2EDuration="41.106273511s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.520468693 +0000 UTC m=+1069.003807978" lastFinishedPulling="2026-01-22 16:48:01.518986593 +0000 UTC m=+1103.002325878" observedRunningTime="2026-01-22 16:48:05.105844118 +0000 UTC m=+1106.589183413" watchObservedRunningTime="2026-01-22 16:48:05.106273511 +0000 UTC m=+1106.589612796" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.117994 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" event={"ID":"e7fdd2cd-e517-46b5-acb3-22b59b7f132f","Type":"ContainerStarted","Data":"810588e7840d9ff4f9a2fccf0bebff7066b6141e074eff2931aa110dff601661"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.118389 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.120237 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" event={"ID":"cdd1962b-fbf0-480c-b5e2-e28ee6988046","Type":"ContainerStarted","Data":"0d424f12f913c9284ff434a38dd081a44586d5b4b125d836421864030fed3a4a"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.124012 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" event={"ID":"7d2439ad-1ca6-4c24-9d15-e04f0f89aedf","Type":"ContainerStarted","Data":"f3cef0682a195659f7b5e3123741938c84f23055a202fd57fcc714b2d9d731c7"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.124358 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.134391 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podStartSLOduration=4.568967204 podStartE2EDuration="41.134375584s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.67049553 +0000 UTC m=+1069.153834815" lastFinishedPulling="2026-01-22 16:48:04.23590391 +0000 UTC m=+1105.719243195" 
observedRunningTime="2026-01-22 16:48:05.133202082 +0000 UTC m=+1106.616541377" watchObservedRunningTime="2026-01-22 16:48:05.134375584 +0000 UTC m=+1106.617714869" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.134886 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" event={"ID":"fa976a5e-7cd9-402f-9792-015ca1488d1f","Type":"ContainerStarted","Data":"ce5016f114838dcaca7cc66b44c49904276b6456085e1179fe6e8e2419474ace"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.135807 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.146953 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" event={"ID":"c73a71b4-f1fd-4a6c-9832-ce9b48a5f220","Type":"ContainerStarted","Data":"98763afcc5b175076c7ccd2ff919e441b44b7eef4344c4bb01c274b2de476b81"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.148054 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.160203 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" event={"ID":"71c16ac1-3276-4096-93c5-d10765320713","Type":"ContainerStarted","Data":"b7623be75913161b201b9b3a55bc1959c9b6136ccdad6e64a3461f0147694c7c"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.160466 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.177455 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" event={"ID":"f5135718-a42b-4089-922b-9fba781fe6db","Type":"ContainerStarted","Data":"09f5beedb93e30a4b68e826f33ffdbcfe408d643e4a6667b28b1a56cfbd08bc2"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.177805 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.186447 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" event={"ID":"e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7","Type":"ContainerStarted","Data":"240edd2f680249409c003b4f15a98966b1e1d8f25dbe8d8d91e622618a7b238d"} Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.187202 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.187907 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" podStartSLOduration=4.753528278 podStartE2EDuration="41.187885518s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.574198823 +0000 UTC m=+1069.057538108" lastFinishedPulling="2026-01-22 16:48:04.008556033 +0000 UTC m=+1105.491895348" observedRunningTime="2026-01-22 16:48:05.184233299 +0000 UTC m=+1106.667572584" 
watchObservedRunningTime="2026-01-22 16:48:05.187885518 +0000 UTC m=+1106.671224803" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.195424 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" podStartSLOduration=7.159820706 podStartE2EDuration="41.195403492s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.484048394 +0000 UTC m=+1068.967387679" lastFinishedPulling="2026-01-22 16:48:01.51963118 +0000 UTC m=+1103.002970465" observedRunningTime="2026-01-22 16:48:05.160063572 +0000 UTC m=+1106.643402857" watchObservedRunningTime="2026-01-22 16:48:05.195403492 +0000 UTC m=+1106.678742777" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.230106 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" podStartSLOduration=4.859332814 podStartE2EDuration="41.230091115s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.861756827 +0000 UTC m=+1069.345096112" lastFinishedPulling="2026-01-22 16:48:04.232515128 +0000 UTC m=+1105.715854413" observedRunningTime="2026-01-22 16:48:05.211300775 +0000 UTC m=+1106.694640060" watchObservedRunningTime="2026-01-22 16:48:05.230091115 +0000 UTC m=+1106.713430400" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.232278 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" podStartSLOduration=7.044731809 podStartE2EDuration="41.232270765s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.331468578 +0000 UTC m=+1068.814807863" lastFinishedPulling="2026-01-22 16:48:01.519007534 +0000 UTC m=+1103.002346819" observedRunningTime="2026-01-22 16:48:05.229255873 +0000 UTC m=+1106.712595148" watchObservedRunningTime="2026-01-22 16:48:05.232270765 +0000 UTC m=+1106.715610050" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.262387 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" podStartSLOduration=7.23213881 podStartE2EDuration="41.262368212s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.48868805 +0000 UTC m=+1068.972027335" lastFinishedPulling="2026-01-22 16:48:01.518917452 +0000 UTC m=+1103.002256737" observedRunningTime="2026-01-22 16:48:05.259369711 +0000 UTC m=+1106.742708996" watchObservedRunningTime="2026-01-22 16:48:05.262368212 +0000 UTC m=+1106.745707497" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.310326 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" podStartSLOduration=7.552537816 podStartE2EDuration="41.310306714s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.307924098 +0000 UTC m=+1068.791263383" lastFinishedPulling="2026-01-22 16:48:01.065692996 +0000 UTC m=+1102.549032281" observedRunningTime="2026-01-22 16:48:05.309897103 +0000 UTC m=+1106.793236398" watchObservedRunningTime="2026-01-22 16:48:05.310306714 +0000 UTC m=+1106.793645999" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.315358 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" podStartSLOduration=7.307330553 podStartE2EDuration="41.315338531s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.510911894 +0000 UTC m=+1068.994251179" lastFinishedPulling="2026-01-22 16:48:01.518919872 +0000 UTC m=+1103.002259157" observedRunningTime="2026-01-22 16:48:05.277506233 +0000 UTC m=+1106.760845518" watchObservedRunningTime="2026-01-22 16:48:05.315338531 +0000 UTC m=+1106.798677816" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.621089 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" podStartSLOduration=5.370668347 podStartE2EDuration="41.621063368s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.756925348 +0000 UTC m=+1069.240264633" lastFinishedPulling="2026-01-22 16:48:04.007320369 +0000 UTC m=+1105.490659654" observedRunningTime="2026-01-22 16:48:05.609295359 +0000 UTC m=+1107.092634704" watchObservedRunningTime="2026-01-22 16:48:05.621063368 +0000 UTC m=+1107.104402663" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.922173 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podStartSLOduration=5.556606161 podStartE2EDuration="41.92215569s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.962482574 +0000 UTC m=+1069.445821859" lastFinishedPulling="2026-01-22 16:48:04.328032103 +0000 UTC m=+1105.811371388" observedRunningTime="2026-01-22 16:48:05.911060099 +0000 UTC m=+1107.394399384" watchObservedRunningTime="2026-01-22 16:48:05.92215569 +0000 UTC m=+1107.405494975" Jan 22 16:48:05 crc kubenswrapper[4758]: I0122 16:48:05.996621 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" podStartSLOduration=8.103617551 podStartE2EDuration="41.996602774s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.625901878 +0000 UTC m=+1069.109241163" lastFinishedPulling="2026-01-22 16:48:01.518887101 +0000 UTC m=+1103.002226386" observedRunningTime="2026-01-22 16:48:05.994019183 +0000 UTC m=+1107.477358468" watchObservedRunningTime="2026-01-22 16:48:05.996602774 +0000 UTC m=+1107.479942059" Jan 22 16:48:06 crc kubenswrapper[4758]: I0122 16:48:06.292935 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" event={"ID":"c4847ca7-5057-4d6d-80c5-f74c7d633e83","Type":"ContainerStarted","Data":"7308af29c5d418456639ec19b8ae89b374cefa9362e0fb4e0f7a39c32ed934c0"} Jan 22 16:48:06 crc kubenswrapper[4758]: I0122 16:48:06.292990 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" event={"ID":"c4847ca7-5057-4d6d-80c5-f74c7d633e83","Type":"ContainerStarted","Data":"e2d99b03e7aeb47daee944092b61fb1991a931a2589ae73d577aaf9e1b01f495"} Jan 22 16:48:06 crc kubenswrapper[4758]: I0122 16:48:06.294082 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:48:06 crc kubenswrapper[4758]: I0122 16:48:06.343053 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" event={"ID":"35a3fafd-45ea-465d-90ef-36148a60685e","Type":"ContainerStarted","Data":"263163443c58779e9c54b8f55f53bc4728c70233c531fa2f8bdbb2fecdf8bcfa"} Jan 22 16:48:06 crc kubenswrapper[4758]: I0122 16:48:06.396254 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" podStartSLOduration=41.396236343 podStartE2EDuration="41.396236343s" podCreationTimestamp="2026-01-22 16:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:48:06.383562208 +0000 UTC m=+1107.866901493" watchObservedRunningTime="2026-01-22 16:48:06.396236343 +0000 UTC m=+1107.879575628" Jan 22 16:48:09 crc kubenswrapper[4758]: I0122 16:48:09.486156 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" event={"ID":"78689fee-3fe7-47d2-866d-6465d23378ea","Type":"ContainerStarted","Data":"0d34a0000f5fcdb9c5200fca3bbdfa6438c3dfb190ac5b100564f735cb276bbe"} Jan 22 16:48:09 crc kubenswrapper[4758]: I0122 16:48:09.486698 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 16:48:09 crc kubenswrapper[4758]: I0122 16:48:09.507909 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podStartSLOduration=4.416015037 podStartE2EDuration="45.507885865s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.503108021 +0000 UTC m=+1068.986447306" lastFinishedPulling="2026-01-22 16:48:08.594978849 +0000 UTC m=+1110.078318134" observedRunningTime="2026-01-22 16:48:09.504443272 +0000 UTC m=+1110.987782557" watchObservedRunningTime="2026-01-22 16:48:09.507885865 +0000 UTC m=+1110.991225150" Jan 22 16:48:11 crc kubenswrapper[4758]: E0122 16:48:11.866912 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" Jan 22 16:48:12 crc kubenswrapper[4758]: I0122 16:48:12.526255 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" event={"ID":"35a3fafd-45ea-465d-90ef-36148a60685e","Type":"ContainerStarted","Data":"cd55dc9adc842248637987f9b3fb3f590baf4dde9075a2f9fba7f513cf9fe363"} Jan 22 16:48:12 crc kubenswrapper[4758]: I0122 16:48:12.526636 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:48:12 crc kubenswrapper[4758]: I0122 16:48:12.528924 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" event={"ID":"cdd1962b-fbf0-480c-b5e2-e28ee6988046","Type":"ContainerStarted","Data":"ac2fce5f5864d1bf8541cf8f20b6f471fb03b7883c80af965fc653333bc7afd4"} Jan 22 16:48:12 crc kubenswrapper[4758]: I0122 16:48:12.533595 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:48:12 crc kubenswrapper[4758]: I0122 16:48:12.535630 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" event={"ID":"19b4b900-d90f-4e59-b082-61f058f5882b","Type":"ContainerStarted","Data":"d51fb1ad15f929a23ca45418e301aaa67b68ac4fdfe0dfa8eb39fcbdb4b8a0f6"} Jan 22 16:48:12 crc kubenswrapper[4758]: I0122 16:48:12.536484 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 16:48:12 crc kubenswrapper[4758]: I0122 16:48:12.555696 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" podStartSLOduration=41.740323129 podStartE2EDuration="48.555669343s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:48:05.093162524 +0000 UTC m=+1106.576501809" lastFinishedPulling="2026-01-22 16:48:11.908508738 +0000 UTC m=+1113.391848023" observedRunningTime="2026-01-22 16:48:12.549916536 +0000 UTC m=+1114.033255821" watchObservedRunningTime="2026-01-22 16:48:12.555669343 +0000 UTC m=+1114.039008628" Jan 22 16:48:12 crc kubenswrapper[4758]: I0122 16:48:12.588308 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" podStartSLOduration=41.208028946 podStartE2EDuration="48.5882859s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:48:04.523712111 +0000 UTC m=+1106.007051396" lastFinishedPulling="2026-01-22 16:48:11.903969065 +0000 UTC m=+1113.387308350" observedRunningTime="2026-01-22 16:48:12.583478328 +0000 UTC m=+1114.066817623" watchObservedRunningTime="2026-01-22 16:48:12.5882859 +0000 UTC m=+1114.071625185" Jan 22 16:48:12 crc kubenswrapper[4758]: I0122 16:48:12.616417 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podStartSLOduration=4.359086791 podStartE2EDuration="48.616392533s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.647868365 +0000 UTC m=+1069.131207650" lastFinishedPulling="2026-01-22 16:48:11.905174107 +0000 UTC m=+1113.388513392" observedRunningTime="2026-01-22 16:48:12.611617653 +0000 UTC m=+1114.094956938" watchObservedRunningTime="2026-01-22 16:48:12.616392533 +0000 UTC m=+1114.099731818" Jan 22 16:48:13 crc kubenswrapper[4758]: I0122 16:48:13.543166 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" event={"ID":"26d5529a-b270-40fc-9faa-037435dd2f80","Type":"ContainerStarted","Data":"fc8ff14bdec8806608a8a75f3794ae87e47866f8eec743c5d6cec4f1daefb700"} Jan 22 16:48:13 crc kubenswrapper[4758]: I0122 16:48:13.546378 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" event={"ID":"d67bb459-81fe-48a2-ac8a-cb4441bb35bb","Type":"ContainerStarted","Data":"95d524686bf752428f84ea0aeeb170f883fe48d942e5469121e60914ddd0df88"} Jan 22 16:48:13 crc kubenswrapper[4758]: I0122 16:48:13.546767 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 16:48:13 crc kubenswrapper[4758]: I0122 16:48:13.565105 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" podStartSLOduration=3.886718963 podStartE2EDuration="48.565087126s" podCreationTimestamp="2026-01-22 16:47:25 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.940788355 +0000 UTC m=+1069.424127640" lastFinishedPulling="2026-01-22 16:48:12.619156528 +0000 UTC m=+1114.102495803" observedRunningTime="2026-01-22 16:48:13.558780605 +0000 UTC m=+1115.042119890" watchObservedRunningTime="2026-01-22 16:48:13.565087126 +0000 UTC m=+1115.048426411" Jan 22 16:48:13 crc kubenswrapper[4758]: I0122 16:48:13.576178 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" podStartSLOduration=4.8404929469999995 podStartE2EDuration="49.576138417s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.646258641 +0000 UTC m=+1069.129597936" lastFinishedPulling="2026-01-22 16:48:12.381904101 +0000 UTC m=+1113.865243406" observedRunningTime="2026-01-22 16:48:13.572155248 +0000 UTC m=+1115.055494533" watchObservedRunningTime="2026-01-22 16:48:13.576138417 +0000 UTC m=+1115.059477702" Jan 22 16:48:13 crc kubenswrapper[4758]: E0122 16:48:13.809667 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" podUID="40845474-36a2-48c0-a0df-af5deb2a31fd" Jan 22 16:48:14 crc kubenswrapper[4758]: I0122 16:48:14.484451 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" Jan 22 16:48:14 crc kubenswrapper[4758]: I0122 16:48:14.543330 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" Jan 22 16:48:14 crc kubenswrapper[4758]: I0122 16:48:14.614666 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 16:48:14 crc kubenswrapper[4758]: I0122 16:48:14.703005 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" Jan 22 16:48:14 crc kubenswrapper[4758]: E0122 16:48:14.810895 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" podUID="5ade5af9-f79e-4285-841c-0f08e88cca47" Jan 22 16:48:14 crc kubenswrapper[4758]: I0122 16:48:14.945588 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 16:48:14 crc kubenswrapper[4758]: I0122 16:48:14.945621 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 16:48:14 crc kubenswrapper[4758]: I0122 16:48:14.961396 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 16:48:14 crc kubenswrapper[4758]: I0122 16:48:14.982966 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 16:48:15 crc kubenswrapper[4758]: I0122 16:48:15.402438 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 16:48:15 crc kubenswrapper[4758]: I0122 16:48:15.403321 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 16:48:15 crc kubenswrapper[4758]: I0122 16:48:15.497326 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 16:48:15 crc kubenswrapper[4758]: I0122 16:48:15.569054 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 16:48:15 crc kubenswrapper[4758]: I0122 16:48:15.748960 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 16:48:16 crc kubenswrapper[4758]: I0122 16:48:16.042611 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 16:48:17 crc kubenswrapper[4758]: I0122 16:48:17.603261 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 16:48:21 crc kubenswrapper[4758]: I0122 16:48:21.110357 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 16:48:25 crc kubenswrapper[4758]: I0122 16:48:25.125607 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 16:48:25 crc kubenswrapper[4758]: I0122 16:48:25.532777 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 16:48:26 crc kubenswrapper[4758]: I0122 16:48:26.768368 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 16:48:27 crc kubenswrapper[4758]: I0122 16:48:27.751256 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" event={"ID":"644142ed-c937-406d-9fd5-3fe856879a92","Type":"ContainerStarted","Data":"59d62e800ffc23ef90c6cb957fb818dce6dce732562db83f9c8eba85e2739440"} Jan 22 16:48:27 crc kubenswrapper[4758]: I0122 16:48:27.752228 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 16:48:27 crc kubenswrapper[4758]: I0122 16:48:27.766952 4758 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podStartSLOduration=4.7657605610000005 podStartE2EDuration="1m3.766929308s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.961269301 +0000 UTC m=+1069.444608576" lastFinishedPulling="2026-01-22 16:48:26.962438038 +0000 UTC m=+1128.445777323" observedRunningTime="2026-01-22 16:48:27.764849702 +0000 UTC m=+1129.248188987" watchObservedRunningTime="2026-01-22 16:48:27.766929308 +0000 UTC m=+1129.250268593" Jan 22 16:48:30 crc kubenswrapper[4758]: I0122 16:48:30.775291 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" event={"ID":"5ade5af9-f79e-4285-841c-0f08e88cca47","Type":"ContainerStarted","Data":"c8dfde0b29e3dd16bd35e249a70593762e6ce0947cfa9be7442ca7bc4007ffe6"} Jan 22 16:48:30 crc kubenswrapper[4758]: I0122 16:48:30.776040 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" Jan 22 16:48:30 crc kubenswrapper[4758]: I0122 16:48:30.776890 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" event={"ID":"40845474-36a2-48c0-a0df-af5deb2a31fd","Type":"ContainerStarted","Data":"a7809f27497752a919b6754cb12a9a6bab28418e529fc85219c6af1b2b6e0687"} Jan 22 16:48:30 crc kubenswrapper[4758]: I0122 16:48:30.777092 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 16:48:30 crc kubenswrapper[4758]: I0122 16:48:30.799705 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" podStartSLOduration=4.313114252 podStartE2EDuration="1m6.799683077s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.820512946 +0000 UTC m=+1069.303852231" lastFinishedPulling="2026-01-22 16:48:30.307081771 +0000 UTC m=+1131.790421056" observedRunningTime="2026-01-22 16:48:30.792066761 +0000 UTC m=+1132.275406046" watchObservedRunningTime="2026-01-22 16:48:30.799683077 +0000 UTC m=+1132.283022362" Jan 22 16:48:30 crc kubenswrapper[4758]: I0122 16:48:30.814789 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" podStartSLOduration=4.830581413 podStartE2EDuration="1m6.814760417s" podCreationTimestamp="2026-01-22 16:47:24 +0000 UTC" firstStartedPulling="2026-01-22 16:47:27.83427483 +0000 UTC m=+1069.317614115" lastFinishedPulling="2026-01-22 16:48:29.818453834 +0000 UTC m=+1131.301793119" observedRunningTime="2026-01-22 16:48:30.809866805 +0000 UTC m=+1132.293206090" watchObservedRunningTime="2026-01-22 16:48:30.814760417 +0000 UTC m=+1132.298099722" Jan 22 16:48:35 crc kubenswrapper[4758]: I0122 16:48:35.092155 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" Jan 22 16:48:35 crc kubenswrapper[4758]: I0122 16:48:35.727782 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 16:48:36 crc kubenswrapper[4758]: I0122 16:48:36.057425 4758 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.674003 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744cb6745-gpxr2"] Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.675647 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.677895 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.678398 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-w2txv" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.678711 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.679091 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.695647 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744cb6745-gpxr2"] Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.853051 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8dd458c9c-d66hc"] Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.854484 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.857695 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.871385 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8dd458c9c-d66hc"] Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.877501 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70c0c5b-a151-49be-aad0-41549f1fa4d3-config\") pod \"dnsmasq-dns-744cb6745-gpxr2\" (UID: \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\") " pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.877632 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6xc\" (UniqueName: \"kubernetes.io/projected/e70c0c5b-a151-49be-aad0-41549f1fa4d3-kube-api-access-7k6xc\") pod \"dnsmasq-dns-744cb6745-gpxr2\" (UID: \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\") " pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.979469 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzxnm\" (UniqueName: \"kubernetes.io/projected/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-kube-api-access-gzxnm\") pod \"dnsmasq-dns-8dd458c9c-d66hc\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.979572 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-dns-svc\") pod \"dnsmasq-dns-8dd458c9c-d66hc\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " 
pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.979633 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k6xc\" (UniqueName: \"kubernetes.io/projected/e70c0c5b-a151-49be-aad0-41549f1fa4d3-kube-api-access-7k6xc\") pod \"dnsmasq-dns-744cb6745-gpxr2\" (UID: \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\") " pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.979661 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-config\") pod \"dnsmasq-dns-8dd458c9c-d66hc\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.979908 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70c0c5b-a151-49be-aad0-41549f1fa4d3-config\") pod \"dnsmasq-dns-744cb6745-gpxr2\" (UID: \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\") " pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:48:57 crc kubenswrapper[4758]: I0122 16:48:57.981060 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70c0c5b-a151-49be-aad0-41549f1fa4d3-config\") pod \"dnsmasq-dns-744cb6745-gpxr2\" (UID: \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\") " pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.008569 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k6xc\" (UniqueName: \"kubernetes.io/projected/e70c0c5b-a151-49be-aad0-41549f1fa4d3-kube-api-access-7k6xc\") pod \"dnsmasq-dns-744cb6745-gpxr2\" (UID: \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\") " pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.081558 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzxnm\" (UniqueName: \"kubernetes.io/projected/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-kube-api-access-gzxnm\") pod \"dnsmasq-dns-8dd458c9c-d66hc\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.081671 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-dns-svc\") pod \"dnsmasq-dns-8dd458c9c-d66hc\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.081719 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-config\") pod \"dnsmasq-dns-8dd458c9c-d66hc\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.082954 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-config\") pod \"dnsmasq-dns-8dd458c9c-d66hc\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.084662 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-dns-svc\") pod \"dnsmasq-dns-8dd458c9c-d66hc\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.105680 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzxnm\" (UniqueName: \"kubernetes.io/projected/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-kube-api-access-gzxnm\") pod \"dnsmasq-dns-8dd458c9c-d66hc\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.168471 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.298625 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.481395 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8dd458c9c-d66hc"] Jan 22 16:48:58 crc kubenswrapper[4758]: I0122 16:48:58.777774 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744cb6745-gpxr2"] Jan 22 16:48:58 crc kubenswrapper[4758]: W0122 16:48:58.790592 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode70c0c5b_a151_49be_aad0_41549f1fa4d3.slice/crio-c04e79c7d8fa97611ab4891a49a263948c63940578884d53bb3943245613004f WatchSource:0}: Error finding container c04e79c7d8fa97611ab4891a49a263948c63940578884d53bb3943245613004f: Status 404 returned error can't find the container with id c04e79c7d8fa97611ab4891a49a263948c63940578884d53bb3943245613004f Jan 22 16:48:59 crc kubenswrapper[4758]: I0122 16:48:59.003428 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744cb6745-gpxr2" event={"ID":"e70c0c5b-a151-49be-aad0-41549f1fa4d3","Type":"ContainerStarted","Data":"c04e79c7d8fa97611ab4891a49a263948c63940578884d53bb3943245613004f"} Jan 22 16:48:59 crc kubenswrapper[4758]: I0122 16:48:59.005310 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" event={"ID":"357fd4b8-9b78-4aac-a03e-985eb2e27dfd","Type":"ContainerStarted","Data":"5cd1dcaf5f19b74705885a5bac5f245a75a43c66bf3d927bfc62dd51104b9d92"} Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.401299 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744cb6745-gpxr2"] Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.433437 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5997d47949-qh6rj"] Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.434597 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.446861 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5997d47949-qh6rj"] Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.535879 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5hf2\" (UniqueName: \"kubernetes.io/projected/b242dc27-4e77-4ae4-a402-0aba8d78e356-kube-api-access-b5hf2\") pod \"dnsmasq-dns-5997d47949-qh6rj\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.535980 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-dns-svc\") pod \"dnsmasq-dns-5997d47949-qh6rj\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.536010 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-config\") pod \"dnsmasq-dns-5997d47949-qh6rj\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.637358 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-dns-svc\") pod \"dnsmasq-dns-5997d47949-qh6rj\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.637419 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-config\") pod \"dnsmasq-dns-5997d47949-qh6rj\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.637454 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5hf2\" (UniqueName: \"kubernetes.io/projected/b242dc27-4e77-4ae4-a402-0aba8d78e356-kube-api-access-b5hf2\") pod \"dnsmasq-dns-5997d47949-qh6rj\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.638685 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-dns-svc\") pod \"dnsmasq-dns-5997d47949-qh6rj\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.639269 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-config\") pod \"dnsmasq-dns-5997d47949-qh6rj\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.663789 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5hf2\" (UniqueName: 
\"kubernetes.io/projected/b242dc27-4e77-4ae4-a402-0aba8d78e356-kube-api-access-b5hf2\") pod \"dnsmasq-dns-5997d47949-qh6rj\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.755947 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.816945 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8dd458c9c-d66hc"] Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.865035 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7856b7c87-dm5lm"] Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.879258 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7856b7c87-dm5lm"] Jan 22 16:49:01 crc kubenswrapper[4758]: I0122 16:49:01.881181 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.048449 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-dns-svc\") pod \"dnsmasq-dns-7856b7c87-dm5lm\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.048501 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvckk\" (UniqueName: \"kubernetes.io/projected/40854732-0c8c-4f6b-bb33-d599ba3de433-kube-api-access-nvckk\") pod \"dnsmasq-dns-7856b7c87-dm5lm\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.048534 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-config\") pod \"dnsmasq-dns-7856b7c87-dm5lm\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.150527 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-dns-svc\") pod \"dnsmasq-dns-7856b7c87-dm5lm\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.150592 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvckk\" (UniqueName: \"kubernetes.io/projected/40854732-0c8c-4f6b-bb33-d599ba3de433-kube-api-access-nvckk\") pod \"dnsmasq-dns-7856b7c87-dm5lm\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.150629 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-config\") pod \"dnsmasq-dns-7856b7c87-dm5lm\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.151645 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-config\") pod \"dnsmasq-dns-7856b7c87-dm5lm\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.152278 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-dns-svc\") pod \"dnsmasq-dns-7856b7c87-dm5lm\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.193991 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvckk\" (UniqueName: \"kubernetes.io/projected/40854732-0c8c-4f6b-bb33-d599ba3de433-kube-api-access-nvckk\") pod \"dnsmasq-dns-7856b7c87-dm5lm\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.194042 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5997d47949-qh6rj"] Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.226625 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.231129 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6594fdd9c9-22rg8"] Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.233962 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.249950 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6594fdd9c9-22rg8"] Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.353787 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-config\") pod \"dnsmasq-dns-6594fdd9c9-22rg8\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.353860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-dns-svc\") pod \"dnsmasq-dns-6594fdd9c9-22rg8\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.353931 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqp2b\" (UniqueName: \"kubernetes.io/projected/5a52e45a-35af-4c02-926d-d82f762b39da-kube-api-access-zqp2b\") pod \"dnsmasq-dns-6594fdd9c9-22rg8\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.455661 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-config\") pod \"dnsmasq-dns-6594fdd9c9-22rg8\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.456062 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-dns-svc\") pod \"dnsmasq-dns-6594fdd9c9-22rg8\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.456135 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqp2b\" (UniqueName: \"kubernetes.io/projected/5a52e45a-35af-4c02-926d-d82f762b39da-kube-api-access-zqp2b\") pod \"dnsmasq-dns-6594fdd9c9-22rg8\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.458330 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-dns-svc\") pod \"dnsmasq-dns-6594fdd9c9-22rg8\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.459049 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-config\") pod \"dnsmasq-dns-6594fdd9c9-22rg8\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.472635 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqp2b\" (UniqueName: \"kubernetes.io/projected/5a52e45a-35af-4c02-926d-d82f762b39da-kube-api-access-zqp2b\") pod \"dnsmasq-dns-6594fdd9c9-22rg8\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.577190 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.595528 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.596652 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.598489 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.599543 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.605165 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d8jxf" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.606142 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.606197 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.606262 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.606398 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.623792 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.759993 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760061 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-config-data\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760161 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760189 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760232 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760252 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760277 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760300 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz86b\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-kube-api-access-xz86b\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78374f0a-c7de-486b-9118-fe2dccc5bdca-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.760401 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78374f0a-c7de-486b-9118-fe2dccc5bdca-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861453 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861541 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861576 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861615 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 
16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861642 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861673 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz86b\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-kube-api-access-xz86b\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861716 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78374f0a-c7de-486b-9118-fe2dccc5bdca-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861770 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78374f0a-c7de-486b-9118-fe2dccc5bdca-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861796 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861825 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-config-data\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.861854 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.862496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.863387 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.863963 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-config-data\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.864022 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.864395 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.869096 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78374f0a-c7de-486b-9118-fe2dccc5bdca-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.869211 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.869372 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78374f0a-c7de-486b-9118-fe2dccc5bdca-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.889926 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.889939 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.890422 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.892105 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz86b\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-kube-api-access-xz86b\") pod \"rabbitmq-server-0\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.927945 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.971462 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.973896 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.983237 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.983373 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.983534 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.983606 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.983731 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.985046 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5sdkn" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.985378 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 16:49:02 crc kubenswrapper[4758]: I0122 16:49:02.991589 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.066758 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7805c55-6999-45a8-afd4-3fd1fa039b7a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.066822 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.066868 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dlwr\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-kube-api-access-8dlwr\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.066898 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.066944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.066962 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.067009 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.067038 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.067065 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.067168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7805c55-6999-45a8-afd4-3fd1fa039b7a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.067250 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168130 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7805c55-6999-45a8-afd4-3fd1fa039b7a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168312 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168351 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7805c55-6999-45a8-afd4-3fd1fa039b7a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168370 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168412 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dlwr\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-kube-api-access-8dlwr\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168442 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168474 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168496 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.168921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.169255 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.169360 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.171434 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.174177 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.174520 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.174539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7805c55-6999-45a8-afd4-3fd1fa039b7a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.175840 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7805c55-6999-45a8-afd4-3fd1fa039b7a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.176567 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.190543 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dlwr\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-kube-api-access-8dlwr\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.191865 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.196837 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.313229 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.344373 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.345570 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.347802 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.353619 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.353835 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.353971 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-8d4mj" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.354083 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.354226 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.354359 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.370495 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475292 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475345 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/be871bb7-c028-4788-9769-51685b7290ea-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475436 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gntpc\" (UniqueName: \"kubernetes.io/projected/be871bb7-c028-4788-9769-51685b7290ea-kube-api-access-gntpc\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475490 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be871bb7-c028-4788-9769-51685b7290ea-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475538 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475567 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475586 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475755 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be871bb7-c028-4788-9769-51685b7290ea-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475852 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be871bb7-c028-4788-9769-51685b7290ea-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.475892 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be871bb7-c028-4788-9769-51685b7290ea-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 
16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be871bb7-c028-4788-9769-51685b7290ea-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577256 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gntpc\" (UniqueName: \"kubernetes.io/projected/be871bb7-c028-4788-9769-51685b7290ea-kube-api-access-gntpc\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577291 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be871bb7-c028-4788-9769-51685b7290ea-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577342 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577373 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577397 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577445 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be871bb7-c028-4788-9769-51685b7290ea-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577482 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be871bb7-c028-4788-9769-51685b7290ea-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be871bb7-c028-4788-9769-51685b7290ea-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " 
pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.577733 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.578054 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be871bb7-c028-4788-9769-51685b7290ea-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.578361 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.578371 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.579501 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/be871bb7-c028-4788-9769-51685b7290ea-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.580355 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/be871bb7-c028-4788-9769-51685b7290ea-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.582943 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 
22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.583952 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/be871bb7-c028-4788-9769-51685b7290ea-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.587680 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/be871bb7-c028-4788-9769-51685b7290ea-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.588129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/be871bb7-c028-4788-9769-51685b7290ea-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.593868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gntpc\" (UniqueName: \"kubernetes.io/projected/be871bb7-c028-4788-9769-51685b7290ea-kube-api-access-gntpc\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.608719 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"be871bb7-c028-4788-9769-51685b7290ea\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:03 crc kubenswrapper[4758]: I0122 16:49:03.666897 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.560799 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.563150 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.566422 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.566444 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-g2jsf" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.571061 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.571552 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.572333 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.579979 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.696841 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f52e2571-4001-441f-b7b7-b4746ae1c10d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.696936 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f52e2571-4001-441f-b7b7-b4746ae1c10d-config-data-default\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.697031 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f52e2571-4001-441f-b7b7-b4746ae1c10d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.705962 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52e2571-4001-441f-b7b7-b4746ae1c10d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.706065 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdnhv\" (UniqueName: \"kubernetes.io/projected/f52e2571-4001-441f-b7b7-b4746ae1c10d-kube-api-access-xdnhv\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.706116 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.706154 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52e2571-4001-441f-b7b7-b4746ae1c10d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.706240 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f52e2571-4001-441f-b7b7-b4746ae1c10d-kolla-config\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.808757 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52e2571-4001-441f-b7b7-b4746ae1c10d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.808812 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f52e2571-4001-441f-b7b7-b4746ae1c10d-kolla-config\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.808876 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f52e2571-4001-441f-b7b7-b4746ae1c10d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.808900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f52e2571-4001-441f-b7b7-b4746ae1c10d-config-data-default\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.808935 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f52e2571-4001-441f-b7b7-b4746ae1c10d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.808950 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52e2571-4001-441f-b7b7-b4746ae1c10d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.808976 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdnhv\" (UniqueName: \"kubernetes.io/projected/f52e2571-4001-441f-b7b7-b4746ae1c10d-kube-api-access-xdnhv\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.809438 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f52e2571-4001-441f-b7b7-b4746ae1c10d-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.809729 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.810353 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.810330 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f52e2571-4001-441f-b7b7-b4746ae1c10d-kolla-config\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.810473 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f52e2571-4001-441f-b7b7-b4746ae1c10d-config-data-default\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.810477 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52e2571-4001-441f-b7b7-b4746ae1c10d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.814340 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f52e2571-4001-441f-b7b7-b4746ae1c10d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.827282 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52e2571-4001-441f-b7b7-b4746ae1c10d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.834038 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdnhv\" (UniqueName: \"kubernetes.io/projected/f52e2571-4001-441f-b7b7-b4746ae1c10d-kube-api-access-xdnhv\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.869917 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"f52e2571-4001-441f-b7b7-b4746ae1c10d\") " pod="openstack/openstack-galera-0" Jan 22 16:49:04 crc kubenswrapper[4758]: I0122 16:49:04.891362 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 16:49:05 crc kubenswrapper[4758]: I0122 16:49:05.941255 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 16:49:05 crc kubenswrapper[4758]: I0122 16:49:05.942868 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:05 crc kubenswrapper[4758]: I0122 16:49:05.945816 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-thg4w" Jan 22 16:49:05 crc kubenswrapper[4758]: I0122 16:49:05.945953 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 22 16:49:05 crc kubenswrapper[4758]: I0122 16:49:05.946487 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 22 16:49:05 crc kubenswrapper[4758]: I0122 16:49:05.946713 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 22 16:49:05 crc kubenswrapper[4758]: I0122 16:49:05.969646 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.027030 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.027086 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.027125 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.027161 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.027195 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.027362 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.027552 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkgrq\" (UniqueName: \"kubernetes.io/projected/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-kube-api-access-jkgrq\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.027600 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.129337 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkgrq\" (UniqueName: \"kubernetes.io/projected/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-kube-api-access-jkgrq\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.129388 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.129430 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.129452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.129474 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.129501 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.129522 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.129564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.129882 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.130005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.130622 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.130697 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.131216 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.133259 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.140616 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.156810 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkgrq\" (UniqueName: \"kubernetes.io/projected/3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf-kube-api-access-jkgrq\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc 
kubenswrapper[4758]: I0122 16:49:06.157339 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf\") " pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.215577 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.216508 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.218809 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.218888 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2w6nn" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.220596 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.234436 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.270162 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.332086 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7bab3882-8d1f-43dd-bbd6-53fc702f137d-config-data\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.332154 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab3882-8d1f-43dd-bbd6-53fc702f137d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.332392 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bab3882-8d1f-43dd-bbd6-53fc702f137d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.332456 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7bab3882-8d1f-43dd-bbd6-53fc702f137d-kolla-config\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.332644 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brwh7\" (UniqueName: \"kubernetes.io/projected/7bab3882-8d1f-43dd-bbd6-53fc702f137d-kube-api-access-brwh7\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.434253 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brwh7\" (UniqueName: 
\"kubernetes.io/projected/7bab3882-8d1f-43dd-bbd6-53fc702f137d-kube-api-access-brwh7\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.434338 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7bab3882-8d1f-43dd-bbd6-53fc702f137d-config-data\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.434378 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab3882-8d1f-43dd-bbd6-53fc702f137d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.434473 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bab3882-8d1f-43dd-bbd6-53fc702f137d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.435133 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7bab3882-8d1f-43dd-bbd6-53fc702f137d-config-data\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.435227 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7bab3882-8d1f-43dd-bbd6-53fc702f137d-kolla-config\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.434604 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7bab3882-8d1f-43dd-bbd6-53fc702f137d-kolla-config\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.440519 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bab3882-8d1f-43dd-bbd6-53fc702f137d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.442501 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab3882-8d1f-43dd-bbd6-53fc702f137d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.462379 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brwh7\" (UniqueName: \"kubernetes.io/projected/7bab3882-8d1f-43dd-bbd6-53fc702f137d-kube-api-access-brwh7\") pod \"memcached-0\" (UID: \"7bab3882-8d1f-43dd-bbd6-53fc702f137d\") " pod="openstack/memcached-0" Jan 22 16:49:06 crc kubenswrapper[4758]: I0122 16:49:06.533728 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 22 16:49:08 crc kubenswrapper[4758]: I0122 16:49:08.303016 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 16:49:08 crc kubenswrapper[4758]: I0122 16:49:08.304395 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 16:49:08 crc kubenswrapper[4758]: I0122 16:49:08.308400 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-kvpw9" Jan 22 16:49:08 crc kubenswrapper[4758]: I0122 16:49:08.322616 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 16:49:08 crc kubenswrapper[4758]: I0122 16:49:08.366288 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brf9k\" (UniqueName: \"kubernetes.io/projected/772760c9-f1af-44f5-bfc0-9b949a639e9f-kube-api-access-brf9k\") pod \"kube-state-metrics-0\" (UID: \"772760c9-f1af-44f5-bfc0-9b949a639e9f\") " pod="openstack/kube-state-metrics-0" Jan 22 16:49:08 crc kubenswrapper[4758]: I0122 16:49:08.468204 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brf9k\" (UniqueName: \"kubernetes.io/projected/772760c9-f1af-44f5-bfc0-9b949a639e9f-kube-api-access-brf9k\") pod \"kube-state-metrics-0\" (UID: \"772760c9-f1af-44f5-bfc0-9b949a639e9f\") " pod="openstack/kube-state-metrics-0" Jan 22 16:49:08 crc kubenswrapper[4758]: I0122 16:49:08.525683 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brf9k\" (UniqueName: \"kubernetes.io/projected/772760c9-f1af-44f5-bfc0-9b949a639e9f-kube-api-access-brf9k\") pod \"kube-state-metrics-0\" (UID: \"772760c9-f1af-44f5-bfc0-9b949a639e9f\") " pod="openstack/kube-state-metrics-0" Jan 22 16:49:08 crc kubenswrapper[4758]: I0122 16:49:08.620817 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.753238 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.769016 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.776481 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.777106 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.777341 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.777957 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.778154 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4ftsd" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.777987 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.778528 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.780616 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.789072 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892008 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892066 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-config\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892095 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892115 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892174 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c980e076-b6f7-4713-8b10-08bea2949331-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892201 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4d84\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-kube-api-access-v4d84\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892281 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892306 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892369 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.892395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.993383 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.993786 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.993832 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.993875 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-config\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.993900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.993920 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.993985 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c980e076-b6f7-4713-8b10-08bea2949331-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.994008 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4d84\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-kube-api-access-v4d84\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.994052 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.994078 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:09 crc kubenswrapper[4758]: I0122 16:49:09.995506 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:09.996556 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:09.997034 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:09.999692 4758 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:09.999719 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/51d824e7b7431a599087fae5dbad8d5d5ded71f29385012a23b0aa020d358d8d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:09.999904 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:10.000436 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:10.000983 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-config\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:10.002068 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:10.019146 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c980e076-b6f7-4713-8b10-08bea2949331-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:10.031341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v4d84\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-kube-api-access-v4d84\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:10.043412 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:10 crc kubenswrapper[4758]: I0122 16:49:10.090299 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.401437 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mpsgq"] Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.403535 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.405255 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pxl5h" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.406176 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.406260 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.417668 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mpsgq"] Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.432099 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7911c0f6-531a-403c-861f-f9cd3ec18ce4-ovn-controller-tls-certs\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.432168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7911c0f6-531a-403c-861f-f9cd3ec18ce4-combined-ca-bundle\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.432208 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7911c0f6-531a-403c-861f-f9cd3ec18ce4-var-run\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.432228 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-677s4\" (UniqueName: \"kubernetes.io/projected/7911c0f6-531a-403c-861f-f9cd3ec18ce4-kube-api-access-677s4\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.432251 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7911c0f6-531a-403c-861f-f9cd3ec18ce4-scripts\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.432308 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7911c0f6-531a-403c-861f-f9cd3ec18ce4-var-run-ovn\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.432327 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7911c0f6-531a-403c-861f-f9cd3ec18ce4-var-log-ovn\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.485098 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6sx98"] Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.486697 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.494589 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6sx98"] Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.533848 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-etc-ovs\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.533911 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7nph\" (UniqueName: \"kubernetes.io/projected/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-kube-api-access-q7nph\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.533948 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7911c0f6-531a-403c-861f-f9cd3ec18ce4-ovn-controller-tls-certs\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.533980 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-var-lib\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534010 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-var-log\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 
16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534030 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7911c0f6-531a-403c-861f-f9cd3ec18ce4-combined-ca-bundle\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534051 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-var-run\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534081 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7911c0f6-531a-403c-861f-f9cd3ec18ce4-var-run\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534106 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-677s4\" (UniqueName: \"kubernetes.io/projected/7911c0f6-531a-403c-861f-f9cd3ec18ce4-kube-api-access-677s4\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534128 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7911c0f6-531a-403c-861f-f9cd3ec18ce4-scripts\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534147 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-scripts\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534233 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7911c0f6-531a-403c-861f-f9cd3ec18ce4-var-run-ovn\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534251 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7911c0f6-531a-403c-861f-f9cd3ec18ce4-var-log-ovn\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534699 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7911c0f6-531a-403c-861f-f9cd3ec18ce4-var-log-ovn\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.534890 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/7911c0f6-531a-403c-861f-f9cd3ec18ce4-var-run\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.535037 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7911c0f6-531a-403c-861f-f9cd3ec18ce4-var-run-ovn\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.537366 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7911c0f6-531a-403c-861f-f9cd3ec18ce4-scripts\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.537617 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7911c0f6-531a-403c-861f-f9cd3ec18ce4-combined-ca-bundle\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.537832 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7911c0f6-531a-403c-861f-f9cd3ec18ce4-ovn-controller-tls-certs\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.550136 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-677s4\" (UniqueName: \"kubernetes.io/projected/7911c0f6-531a-403c-861f-f9cd3ec18ce4-kube-api-access-677s4\") pod \"ovn-controller-mpsgq\" (UID: \"7911c0f6-531a-403c-861f-f9cd3ec18ce4\") " pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.635542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-var-log\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.635768 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-var-log\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.636006 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-var-run\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.636159 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-scripts\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.636269 
4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-var-run\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.642927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-scripts\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.643252 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-etc-ovs\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.643468 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-etc-ovs\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.643508 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7nph\" (UniqueName: \"kubernetes.io/projected/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-kube-api-access-q7nph\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.643866 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-var-lib\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.644277 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-var-lib\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.666441 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7nph\" (UniqueName: \"kubernetes.io/projected/ca3428d6-c5a4-4c73-897f-7a03fa7c8463-kube-api-access-q7nph\") pod \"ovn-controller-ovs-6sx98\" (UID: \"ca3428d6-c5a4-4c73-897f-7a03fa7c8463\") " pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.760554 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:11 crc kubenswrapper[4758]: I0122 16:49:11.803498 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:12 crc kubenswrapper[4758]: I0122 16:49:12.839491 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 16:49:12 crc kubenswrapper[4758]: I0122 16:49:12.841039 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:12 crc kubenswrapper[4758]: I0122 16:49:12.846108 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tzrkw" Jan 22 16:49:12 crc kubenswrapper[4758]: I0122 16:49:12.846126 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 22 16:49:12 crc kubenswrapper[4758]: I0122 16:49:12.846199 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 22 16:49:12 crc kubenswrapper[4758]: I0122 16:49:12.846347 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 22 16:49:12 crc kubenswrapper[4758]: I0122 16:49:12.846404 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 22 16:49:12 crc kubenswrapper[4758]: I0122 16:49:12.857008 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.002883 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75jk\" (UniqueName: \"kubernetes.io/projected/aa00a9b2-102b-4b46-b69f-86efda64b178-kube-api-access-r75jk\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.003085 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa00a9b2-102b-4b46-b69f-86efda64b178-config\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.003170 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa00a9b2-102b-4b46-b69f-86efda64b178-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.003256 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.003350 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa00a9b2-102b-4b46-b69f-86efda64b178-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.003461 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa00a9b2-102b-4b46-b69f-86efda64b178-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.003649 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa00a9b2-102b-4b46-b69f-86efda64b178-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.003858 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa00a9b2-102b-4b46-b69f-86efda64b178-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.105566 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa00a9b2-102b-4b46-b69f-86efda64b178-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.106049 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa00a9b2-102b-4b46-b69f-86efda64b178-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.107010 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa00a9b2-102b-4b46-b69f-86efda64b178-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.106246 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa00a9b2-102b-4b46-b69f-86efda64b178-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.107197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r75jk\" (UniqueName: \"kubernetes.io/projected/aa00a9b2-102b-4b46-b69f-86efda64b178-kube-api-access-r75jk\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.107285 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa00a9b2-102b-4b46-b69f-86efda64b178-config\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.107315 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa00a9b2-102b-4b46-b69f-86efda64b178-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.107361 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 
16:49:13.107395 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa00a9b2-102b-4b46-b69f-86efda64b178-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.107654 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.108051 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa00a9b2-102b-4b46-b69f-86efda64b178-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.108449 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa00a9b2-102b-4b46-b69f-86efda64b178-config\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.114350 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa00a9b2-102b-4b46-b69f-86efda64b178-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.114481 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa00a9b2-102b-4b46-b69f-86efda64b178-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.114512 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa00a9b2-102b-4b46-b69f-86efda64b178-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.128764 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r75jk\" (UniqueName: \"kubernetes.io/projected/aa00a9b2-102b-4b46-b69f-86efda64b178-kube-api-access-r75jk\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.137714 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa00a9b2-102b-4b46-b69f-86efda64b178\") " pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:13 crc kubenswrapper[4758]: I0122 16:49:13.306154 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.821519 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.824112 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.826040 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.826420 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-x59mw" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.827133 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.827681 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.837727 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.925684 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.925835 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5367d-b78c-4015-ac3a-4db4e3d3012a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.925859 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5367d-b78c-4015-ac3a-4db4e3d3012a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.925892 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fad5367d-b78c-4015-ac3a-4db4e3d3012a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.925923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad5367d-b78c-4015-ac3a-4db4e3d3012a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.925948 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fad5367d-b78c-4015-ac3a-4db4e3d3012a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:15 crc 
kubenswrapper[4758]: I0122 16:49:15.926081 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qgv6\" (UniqueName: \"kubernetes.io/projected/fad5367d-b78c-4015-ac3a-4db4e3d3012a-kube-api-access-6qgv6\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:15 crc kubenswrapper[4758]: I0122 16:49:15.926183 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad5367d-b78c-4015-ac3a-4db4e3d3012a-config\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.027051 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5367d-b78c-4015-ac3a-4db4e3d3012a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.027095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5367d-b78c-4015-ac3a-4db4e3d3012a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.027135 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fad5367d-b78c-4015-ac3a-4db4e3d3012a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.027175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad5367d-b78c-4015-ac3a-4db4e3d3012a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.027210 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fad5367d-b78c-4015-ac3a-4db4e3d3012a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.027238 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qgv6\" (UniqueName: \"kubernetes.io/projected/fad5367d-b78c-4015-ac3a-4db4e3d3012a-kube-api-access-6qgv6\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.027301 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad5367d-b78c-4015-ac3a-4db4e3d3012a-config\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.027345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.027678 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.028300 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fad5367d-b78c-4015-ac3a-4db4e3d3012a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.029109 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad5367d-b78c-4015-ac3a-4db4e3d3012a-config\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.029139 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fad5367d-b78c-4015-ac3a-4db4e3d3012a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.034598 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad5367d-b78c-4015-ac3a-4db4e3d3012a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.034974 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5367d-b78c-4015-ac3a-4db4e3d3012a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.035622 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fad5367d-b78c-4015-ac3a-4db4e3d3012a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.050315 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qgv6\" (UniqueName: \"kubernetes.io/projected/fad5367d-b78c-4015-ac3a-4db4e3d3012a-kube-api-access-6qgv6\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.062286 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fad5367d-b78c-4015-ac3a-4db4e3d3012a\") " pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:16 crc kubenswrapper[4758]: I0122 16:49:16.142755 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:24 crc kubenswrapper[4758]: I0122 16:49:24.552701 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 16:49:26 crc kubenswrapper[4758]: E0122 16:49:26.170237 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 22 16:49:26 crc kubenswrapper[4758]: E0122 16:49:26.170585 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 22 16:49:26 crc kubenswrapper[4758]: E0122 16:49:26.170734 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.196:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7k6xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744cb6745-gpxr2_openstack(e70c0c5b-a151-49be-aad0-41549f1fa4d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:49:26 crc kubenswrapper[4758]: E0122 16:49:26.171953 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744cb6745-gpxr2" podUID="e70c0c5b-a151-49be-aad0-41549f1fa4d3" Jan 22 16:49:26 crc kubenswrapper[4758]: E0122 16:49:26.181280 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 22 16:49:26 crc kubenswrapper[4758]: E0122 16:49:26.181350 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 22 16:49:26 crc kubenswrapper[4758]: E0122 16:49:26.181628 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.196:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzxnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8dd458c9c-d66hc_openstack(357fd4b8-9b78-4aac-a03e-985eb2e27dfd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:49:26 crc kubenswrapper[4758]: E0122 16:49:26.184295 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" podUID="357fd4b8-9b78-4aac-a03e-985eb2e27dfd" Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.251463 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf","Type":"ContainerStarted","Data":"fa04000ade07b559a134b22bc31d64edd871b21d498961cd9bc765a78e8a97c6"} Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.904267 
4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.936339 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.937532 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70c0c5b-a151-49be-aad0-41549f1fa4d3-config\") pod \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\" (UID: \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\") " Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.937604 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-dns-svc\") pod \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.937641 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzxnm\" (UniqueName: \"kubernetes.io/projected/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-kube-api-access-gzxnm\") pod \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.937663 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-config\") pod \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\" (UID: \"357fd4b8-9b78-4aac-a03e-985eb2e27dfd\") " Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.937709 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k6xc\" (UniqueName: \"kubernetes.io/projected/e70c0c5b-a151-49be-aad0-41549f1fa4d3-kube-api-access-7k6xc\") pod \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\" (UID: \"e70c0c5b-a151-49be-aad0-41549f1fa4d3\") " Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.938447 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-config" (OuterVolumeSpecName: "config") pod "357fd4b8-9b78-4aac-a03e-985eb2e27dfd" (UID: "357fd4b8-9b78-4aac-a03e-985eb2e27dfd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.938474 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70c0c5b-a151-49be-aad0-41549f1fa4d3-config" (OuterVolumeSpecName: "config") pod "e70c0c5b-a151-49be-aad0-41549f1fa4d3" (UID: "e70c0c5b-a151-49be-aad0-41549f1fa4d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.938681 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "357fd4b8-9b78-4aac-a03e-985eb2e27dfd" (UID: "357fd4b8-9b78-4aac-a03e-985eb2e27dfd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.946646 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70c0c5b-a151-49be-aad0-41549f1fa4d3-kube-api-access-7k6xc" (OuterVolumeSpecName: "kube-api-access-7k6xc") pod "e70c0c5b-a151-49be-aad0-41549f1fa4d3" (UID: "e70c0c5b-a151-49be-aad0-41549f1fa4d3"). InnerVolumeSpecName "kube-api-access-7k6xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:26 crc kubenswrapper[4758]: I0122 16:49:26.946697 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-kube-api-access-gzxnm" (OuterVolumeSpecName: "kube-api-access-gzxnm") pod "357fd4b8-9b78-4aac-a03e-985eb2e27dfd" (UID: "357fd4b8-9b78-4aac-a03e-985eb2e27dfd"). InnerVolumeSpecName "kube-api-access-gzxnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.039405 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.039435 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzxnm\" (UniqueName: \"kubernetes.io/projected/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-kube-api-access-gzxnm\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.039450 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357fd4b8-9b78-4aac-a03e-985eb2e27dfd-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.039461 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k6xc\" (UniqueName: \"kubernetes.io/projected/e70c0c5b-a151-49be-aad0-41549f1fa4d3-kube-api-access-7k6xc\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.039472 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70c0c5b-a151-49be-aad0-41549f1fa4d3-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.263144 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744cb6745-gpxr2" Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.263174 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744cb6745-gpxr2" event={"ID":"e70c0c5b-a151-49be-aad0-41549f1fa4d3","Type":"ContainerDied","Data":"c04e79c7d8fa97611ab4891a49a263948c63940578884d53bb3943245613004f"} Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.265883 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" event={"ID":"357fd4b8-9b78-4aac-a03e-985eb2e27dfd","Type":"ContainerDied","Data":"5cd1dcaf5f19b74705885a5bac5f245a75a43c66bf3d927bfc62dd51104b9d92"} Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.265968 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8dd458c9c-d66hc" Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.333902 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8dd458c9c-d66hc"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.340942 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8dd458c9c-d66hc"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.368262 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744cb6745-gpxr2"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.374175 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744cb6745-gpxr2"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.514972 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.533401 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5997d47949-qh6rj"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.550535 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mpsgq"] Jan 22 16:49:27 crc kubenswrapper[4758]: W0122 16:49:27.558292 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7911c0f6_531a_403c_861f_f9cd3ec18ce4.slice/crio-04fd9a782162c472736eecaae9eae02cc65f7c2f70b7c308230d2402f4ca6726 WatchSource:0}: Error finding container 04fd9a782162c472736eecaae9eae02cc65f7c2f70b7c308230d2402f4ca6726: Status 404 returned error can't find the container with id 04fd9a782162c472736eecaae9eae02cc65f7c2f70b7c308230d2402f4ca6726 Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.569274 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 22 16:49:27 crc kubenswrapper[4758]: W0122 16:49:27.571865 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7805c55_6999_45a8_afd4_3fd1fa039b7a.slice/crio-3163ba667ef55e66abe3d198eb0aa4c990e5e7e6e438fec9d7dcf6a48d2f19d9 WatchSource:0}: Error finding container 3163ba667ef55e66abe3d198eb0aa4c990e5e7e6e438fec9d7dcf6a48d2f19d9: Status 404 returned error can't find the container with id 3163ba667ef55e66abe3d198eb0aa4c990e5e7e6e438fec9d7dcf6a48d2f19d9 Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.615497 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.629216 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6594fdd9c9-22rg8"] Jan 22 16:49:27 crc kubenswrapper[4758]: W0122 16:49:27.687277 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf52e2571_4001_441f_b7b7_b4746ae1c10d.slice/crio-59d9a6af5fac736a82662c17699a4099d2857cca2c034852279ed29fa65fdcfa WatchSource:0}: Error finding container 59d9a6af5fac736a82662c17699a4099d2857cca2c034852279ed29fa65fdcfa: Status 404 returned error can't find the container with id 59d9a6af5fac736a82662c17699a4099d2857cca2c034852279ed29fa65fdcfa Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.718361 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 16:49:27 crc kubenswrapper[4758]: W0122 16:49:27.726223 4758 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a52e45a_35af_4c02_926d_d82f762b39da.slice/crio-f1408238b11824975f0e0d3d8b6b32cccb873594c417a3714a940b35d0a103bd WatchSource:0}: Error finding container f1408238b11824975f0e0d3d8b6b32cccb873594c417a3714a940b35d0a103bd: Status 404 returned error can't find the container with id f1408238b11824975f0e0d3d8b6b32cccb873594c417a3714a940b35d0a103bd Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.736071 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.830311 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7856b7c87-dm5lm"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.849187 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 16:49:27 crc kubenswrapper[4758]: W0122 16:49:27.850131 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40854732_0c8c_4f6b_bb33_d599ba3de433.slice/crio-2b8cf0548540e88304b23721dadec3da56b0f4def3c08636d7be9434bd4fd3d9 WatchSource:0}: Error finding container 2b8cf0548540e88304b23721dadec3da56b0f4def3c08636d7be9434bd4fd3d9: Status 404 returned error can't find the container with id 2b8cf0548540e88304b23721dadec3da56b0f4def3c08636d7be9434bd4fd3d9 Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.855133 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 16:49:27 crc kubenswrapper[4758]: W0122 16:49:27.864490 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa00a9b2_102b_4b46_b69f_86efda64b178.slice/crio-c7474d7c4729cf785fae3658df5d4229c32038fd7d28cf84d4dc6dff45bc1db5 WatchSource:0}: Error finding container c7474d7c4729cf785fae3658df5d4229c32038fd7d28cf84d4dc6dff45bc1db5: Status 404 returned error can't find the container with id c7474d7c4729cf785fae3658df5d4229c32038fd7d28cf84d4dc6dff45bc1db5 Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.934485 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 16:49:27 crc kubenswrapper[4758]: I0122 16:49:27.985896 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.139829 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6sx98"] Jan 22 16:49:28 crc kubenswrapper[4758]: E0122 16:49:28.166519 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:38.102.83.196:5001/podified-master-centos10/openstack-ovn-base:watcher_latest,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n94hb5h588h8fh5dfh559h67fh569hdh64h74h57fhd8h678h65bh58bhb6h98h56dh5f5h84h669h698h8ch5bdh5dh9hfbhd9h565hb4hcfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7nph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-6sx98_openstack(ca3428d6-c5a4-4c73-897f-7a03fa7c8463): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:49:28 crc kubenswrapper[4758]: E0122 16:49:28.168110 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/ovn-controller-ovs-6sx98" podUID="ca3428d6-c5a4-4c73-897f-7a03fa7c8463" Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.276768 4758 generic.go:334] "Generic (PLEG): container finished" podID="5a52e45a-35af-4c02-926d-d82f762b39da" containerID="cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f" exitCode=0 Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.277101 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" event={"ID":"5a52e45a-35af-4c02-926d-d82f762b39da","Type":"ContainerDied","Data":"cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.277662 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" event={"ID":"5a52e45a-35af-4c02-926d-d82f762b39da","Type":"ContainerStarted","Data":"f1408238b11824975f0e0d3d8b6b32cccb873594c417a3714a940b35d0a103bd"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.280004 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"fad5367d-b78c-4015-ac3a-4db4e3d3012a","Type":"ContainerStarted","Data":"016d85d4f5f603c8dfd616f59045b5069fef3967c7f45b7d460f22b85fba5e73"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.291169 4758 generic.go:334] "Generic (PLEG): container finished" podID="40854732-0c8c-4f6b-bb33-d599ba3de433" containerID="673e68d650c00d536d460775abe22fae19c19421a2ffa3a8399122e04cec2528" exitCode=0 Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.291298 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" event={"ID":"40854732-0c8c-4f6b-bb33-d599ba3de433","Type":"ContainerDied","Data":"673e68d650c00d536d460775abe22fae19c19421a2ffa3a8399122e04cec2528"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.291328 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" event={"ID":"40854732-0c8c-4f6b-bb33-d599ba3de433","Type":"ContainerStarted","Data":"2b8cf0548540e88304b23721dadec3da56b0f4def3c08636d7be9434bd4fd3d9"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.302995 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6sx98" event={"ID":"ca3428d6-c5a4-4c73-897f-7a03fa7c8463","Type":"ContainerStarted","Data":"93ca29379818851fcda8e9a0d8aedcee1fe1a915ffc3ed523744bdae4f990ec4"} Jan 22 16:49:28 crc kubenswrapper[4758]: E0122 16:49:28.305165 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-6sx98" podUID="ca3428d6-c5a4-4c73-897f-7a03fa7c8463" Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.307867 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78374f0a-c7de-486b-9118-fe2dccc5bdca","Type":"ContainerStarted","Data":"94944dc56131edccaead4d11a34fc16104c1ec896a0e5471a50bf56b08cfb229"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.321431 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerStarted","Data":"6ac6f9402bbe61d74f7f9a4bdddab60fb5e210ad048b4ceb9052bc11747df09f"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.323139 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f52e2571-4001-441f-b7b7-b4746ae1c10d","Type":"ContainerStarted","Data":"59d9a6af5fac736a82662c17699a4099d2857cca2c034852279ed29fa65fdcfa"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.325804 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5997d47949-qh6rj" event={"ID":"b242dc27-4e77-4ae4-a402-0aba8d78e356","Type":"ContainerStarted","Data":"92e82d494863e6b13a19497ff09c9f8ac71ec272cebf5a6eb177b0c911031b15"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.325843 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5997d47949-qh6rj" event={"ID":"b242dc27-4e77-4ae4-a402-0aba8d78e356","Type":"ContainerStarted","Data":"4498b1b165651deab118d1cc97772b77f545af830e8a231f97e49f13bf49d3cb"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.325937 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5997d47949-qh6rj" podUID="b242dc27-4e77-4ae4-a402-0aba8d78e356" containerName="init" 
containerID="cri-o://92e82d494863e6b13a19497ff09c9f8ac71ec272cebf5a6eb177b0c911031b15" gracePeriod=10 Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.327378 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"772760c9-f1af-44f5-bfc0-9b949a639e9f","Type":"ContainerStarted","Data":"10f76fd4984e92250fd0bbeb0545a5e87393aca7916feaf8654f21daa58194c3"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.332134 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa00a9b2-102b-4b46-b69f-86efda64b178","Type":"ContainerStarted","Data":"c7474d7c4729cf785fae3658df5d4229c32038fd7d28cf84d4dc6dff45bc1db5"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.333775 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mpsgq" event={"ID":"7911c0f6-531a-403c-861f-f9cd3ec18ce4","Type":"ContainerStarted","Data":"04fd9a782162c472736eecaae9eae02cc65f7c2f70b7c308230d2402f4ca6726"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.344503 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7805c55-6999-45a8-afd4-3fd1fa039b7a","Type":"ContainerStarted","Data":"3163ba667ef55e66abe3d198eb0aa4c990e5e7e6e438fec9d7dcf6a48d2f19d9"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.351769 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"be871bb7-c028-4788-9769-51685b7290ea","Type":"ContainerStarted","Data":"60e060720558712ef40ab18e628c7890c122ea5d8681a448238a2f91659d871e"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.354261 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7bab3882-8d1f-43dd-bbd6-53fc702f137d","Type":"ContainerStarted","Data":"5ec349bf3bfa9827c010cce76adced678ed765a1622431fc12d07a6834433b7f"} Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.826603 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="357fd4b8-9b78-4aac-a03e-985eb2e27dfd" path="/var/lib/kubelet/pods/357fd4b8-9b78-4aac-a03e-985eb2e27dfd/volumes" Jan 22 16:49:28 crc kubenswrapper[4758]: I0122 16:49:28.827031 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e70c0c5b-a151-49be-aad0-41549f1fa4d3" path="/var/lib/kubelet/pods/e70c0c5b-a151-49be-aad0-41549f1fa4d3/volumes" Jan 22 16:49:29 crc kubenswrapper[4758]: I0122 16:49:29.364974 4758 generic.go:334] "Generic (PLEG): container finished" podID="b242dc27-4e77-4ae4-a402-0aba8d78e356" containerID="92e82d494863e6b13a19497ff09c9f8ac71ec272cebf5a6eb177b0c911031b15" exitCode=0 Jan 22 16:49:29 crc kubenswrapper[4758]: I0122 16:49:29.365056 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5997d47949-qh6rj" event={"ID":"b242dc27-4e77-4ae4-a402-0aba8d78e356","Type":"ContainerDied","Data":"92e82d494863e6b13a19497ff09c9f8ac71ec272cebf5a6eb177b0c911031b15"} Jan 22 16:49:29 crc kubenswrapper[4758]: I0122 16:49:29.368643 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" event={"ID":"40854732-0c8c-4f6b-bb33-d599ba3de433","Type":"ContainerStarted","Data":"70da99c660a9d2f2d3cc10bd71ea00452b1c24a21d12551847ad9b2e93f3548d"} Jan 22 16:49:29 crc kubenswrapper[4758]: I0122 16:49:29.368777 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:29 crc kubenswrapper[4758]: I0122 
16:49:29.371890 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" event={"ID":"5a52e45a-35af-4c02-926d-d82f762b39da","Type":"ContainerStarted","Data":"f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6"} Jan 22 16:49:29 crc kubenswrapper[4758]: E0122 16:49:29.373480 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-6sx98" podUID="ca3428d6-c5a4-4c73-897f-7a03fa7c8463" Jan 22 16:49:29 crc kubenswrapper[4758]: I0122 16:49:29.392946 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" podStartSLOduration=28.392906284 podStartE2EDuration="28.392906284s" podCreationTimestamp="2026-01-22 16:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:49:29.387263351 +0000 UTC m=+1190.870602656" watchObservedRunningTime="2026-01-22 16:49:29.392906284 +0000 UTC m=+1190.876245559" Jan 22 16:49:29 crc kubenswrapper[4758]: I0122 16:49:29.430525 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" podStartSLOduration=27.311047476 podStartE2EDuration="27.430486788s" podCreationTimestamp="2026-01-22 16:49:02 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.794078867 +0000 UTC m=+1189.277418152" lastFinishedPulling="2026-01-22 16:49:27.913518179 +0000 UTC m=+1189.396857464" observedRunningTime="2026-01-22 16:49:29.424066653 +0000 UTC m=+1190.907405948" watchObservedRunningTime="2026-01-22 16:49:29.430486788 +0000 UTC m=+1190.913826073" Jan 22 16:49:30 crc kubenswrapper[4758]: I0122 16:49:30.378652 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.426606 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5997d47949-qh6rj" event={"ID":"b242dc27-4e77-4ae4-a402-0aba8d78e356","Type":"ContainerDied","Data":"4498b1b165651deab118d1cc97772b77f545af830e8a231f97e49f13bf49d3cb"} Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.427286 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4498b1b165651deab118d1cc97772b77f545af830e8a231f97e49f13bf49d3cb" Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.451901 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.606625 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-dns-svc\") pod \"b242dc27-4e77-4ae4-a402-0aba8d78e356\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.606794 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-config\") pod \"b242dc27-4e77-4ae4-a402-0aba8d78e356\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.606858 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5hf2\" (UniqueName: \"kubernetes.io/projected/b242dc27-4e77-4ae4-a402-0aba8d78e356-kube-api-access-b5hf2\") pod \"b242dc27-4e77-4ae4-a402-0aba8d78e356\" (UID: \"b242dc27-4e77-4ae4-a402-0aba8d78e356\") " Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.614983 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b242dc27-4e77-4ae4-a402-0aba8d78e356-kube-api-access-b5hf2" (OuterVolumeSpecName: "kube-api-access-b5hf2") pod "b242dc27-4e77-4ae4-a402-0aba8d78e356" (UID: "b242dc27-4e77-4ae4-a402-0aba8d78e356"). InnerVolumeSpecName "kube-api-access-b5hf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.629297 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-config" (OuterVolumeSpecName: "config") pod "b242dc27-4e77-4ae4-a402-0aba8d78e356" (UID: "b242dc27-4e77-4ae4-a402-0aba8d78e356"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.631727 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b242dc27-4e77-4ae4-a402-0aba8d78e356" (UID: "b242dc27-4e77-4ae4-a402-0aba8d78e356"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.709012 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.709046 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b242dc27-4e77-4ae4-a402-0aba8d78e356-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:36 crc kubenswrapper[4758]: I0122 16:49:36.709065 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5hf2\" (UniqueName: \"kubernetes.io/projected/b242dc27-4e77-4ae4-a402-0aba8d78e356-kube-api-access-b5hf2\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:37 crc kubenswrapper[4758]: I0122 16:49:37.230447 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:37 crc kubenswrapper[4758]: I0122 16:49:37.432348 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5997d47949-qh6rj" Jan 22 16:49:37 crc kubenswrapper[4758]: I0122 16:49:37.473039 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5997d47949-qh6rj"] Jan 22 16:49:37 crc kubenswrapper[4758]: I0122 16:49:37.478822 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5997d47949-qh6rj"] Jan 22 16:49:37 crc kubenswrapper[4758]: I0122 16:49:37.578832 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:49:37 crc kubenswrapper[4758]: I0122 16:49:37.629433 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7856b7c87-dm5lm"] Jan 22 16:49:37 crc kubenswrapper[4758]: I0122 16:49:37.629670 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" podUID="40854732-0c8c-4f6b-bb33-d599ba3de433" containerName="dnsmasq-dns" containerID="cri-o://70da99c660a9d2f2d3cc10bd71ea00452b1c24a21d12551847ad9b2e93f3548d" gracePeriod=10 Jan 22 16:49:38 crc kubenswrapper[4758]: I0122 16:49:38.446305 4758 generic.go:334] "Generic (PLEG): container finished" podID="40854732-0c8c-4f6b-bb33-d599ba3de433" containerID="70da99c660a9d2f2d3cc10bd71ea00452b1c24a21d12551847ad9b2e93f3548d" exitCode=0 Jan 22 16:49:38 crc kubenswrapper[4758]: I0122 16:49:38.446354 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" event={"ID":"40854732-0c8c-4f6b-bb33-d599ba3de433","Type":"ContainerDied","Data":"70da99c660a9d2f2d3cc10bd71ea00452b1c24a21d12551847ad9b2e93f3548d"} Jan 22 16:49:38 crc kubenswrapper[4758]: I0122 16:49:38.816849 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b242dc27-4e77-4ae4-a402-0aba8d78e356" path="/var/lib/kubelet/pods/b242dc27-4e77-4ae4-a402-0aba8d78e356/volumes" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.289418 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.456869 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-dns-svc\") pod \"40854732-0c8c-4f6b-bb33-d599ba3de433\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.457051 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-config\") pod \"40854732-0c8c-4f6b-bb33-d599ba3de433\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.457172 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvckk\" (UniqueName: \"kubernetes.io/projected/40854732-0c8c-4f6b-bb33-d599ba3de433-kube-api-access-nvckk\") pod \"40854732-0c8c-4f6b-bb33-d599ba3de433\" (UID: \"40854732-0c8c-4f6b-bb33-d599ba3de433\") " Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.461871 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40854732-0c8c-4f6b-bb33-d599ba3de433-kube-api-access-nvckk" (OuterVolumeSpecName: "kube-api-access-nvckk") pod "40854732-0c8c-4f6b-bb33-d599ba3de433" (UID: "40854732-0c8c-4f6b-bb33-d599ba3de433"). 
InnerVolumeSpecName "kube-api-access-nvckk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.472232 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" event={"ID":"40854732-0c8c-4f6b-bb33-d599ba3de433","Type":"ContainerDied","Data":"2b8cf0548540e88304b23721dadec3da56b0f4def3c08636d7be9434bd4fd3d9"} Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.472321 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7856b7c87-dm5lm" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.472318 4758 scope.go:117] "RemoveContainer" containerID="70da99c660a9d2f2d3cc10bd71ea00452b1c24a21d12551847ad9b2e93f3548d" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.496466 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40854732-0c8c-4f6b-bb33-d599ba3de433" (UID: "40854732-0c8c-4f6b-bb33-d599ba3de433"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.504449 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-config" (OuterVolumeSpecName: "config") pod "40854732-0c8c-4f6b-bb33-d599ba3de433" (UID: "40854732-0c8c-4f6b-bb33-d599ba3de433"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.558641 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.558667 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvckk\" (UniqueName: \"kubernetes.io/projected/40854732-0c8c-4f6b-bb33-d599ba3de433-kube-api-access-nvckk\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.558677 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40854732-0c8c-4f6b-bb33-d599ba3de433-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.618460 4758 scope.go:117] "RemoveContainer" containerID="673e68d650c00d536d460775abe22fae19c19421a2ffa3a8399122e04cec2528" Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.807857 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7856b7c87-dm5lm"] Jan 22 16:49:39 crc kubenswrapper[4758]: I0122 16:49:39.814103 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7856b7c87-dm5lm"] Jan 22 16:49:40 crc kubenswrapper[4758]: I0122 16:49:40.821053 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40854732-0c8c-4f6b-bb33-d599ba3de433" path="/var/lib/kubelet/pods/40854732-0c8c-4f6b-bb33-d599ba3de433/volumes" Jan 22 16:49:43 crc kubenswrapper[4758]: I0122 16:49:43.761023 4758 scope.go:117] "RemoveContainer" containerID="701ed7be15db42c7f643dc10d035d41464427c22f85ca8a29d312c001e0ecb01" Jan 22 16:49:43 crc kubenswrapper[4758]: I0122 16:49:43.837564 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:49:43 crc kubenswrapper[4758]: I0122 16:49:43.837727 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:49:44 crc kubenswrapper[4758]: I0122 16:49:44.046119 4758 scope.go:117] "RemoveContainer" containerID="6d7a0a02923094ffd0e11fd5c139e6b05b1f91bdafd4e7ba121ff392f4ef264c" Jan 22 16:49:45 crc kubenswrapper[4758]: E0122 16:49:45.074963 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 22 16:49:45 crc kubenswrapper[4758]: E0122 16:49:45.075296 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 22 16:49:45 crc kubenswrapper[4758]: E0122 16:49:45.075457 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brf9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(772760c9-f1af-44f5-bfc0-9b949a639e9f): ErrImagePull: rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:49:45 crc kubenswrapper[4758]: E0122 16:49:45.077774 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="772760c9-f1af-44f5-bfc0-9b949a639e9f" Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.543251 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7bab3882-8d1f-43dd-bbd6-53fc702f137d","Type":"ContainerStarted","Data":"ccdb1f7e028d9ea9aac1d2cc6c23883e09a73bdfbd1bc8aea7b4cd0092ffe33b"} Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.543704 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.546379 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mpsgq" event={"ID":"7911c0f6-531a-403c-861f-f9cd3ec18ce4","Type":"ContainerStarted","Data":"a78ef4630125cd062034510b68c24d96e9e7c55f4c3e83a3f90d59bfac3c36db"} Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.546912 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mpsgq" Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.549790 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf","Type":"ContainerStarted","Data":"94d82265dc604783013df0466145fc4a6dda5ffaeafe0be0d3d56500eb57d2ce"} Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.551681 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f52e2571-4001-441f-b7b7-b4746ae1c10d","Type":"ContainerStarted","Data":"46c6da199ec04b38876cd5c037745e5b671466dbd8ac1f9269f365d95e0bbbb2"} Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.553626 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fad5367d-b78c-4015-ac3a-4db4e3d3012a","Type":"ContainerStarted","Data":"7f3214fb3f84183db55283f799eaf3691f00bba30984964fad9a9aeee35e7667"} Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.555688 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa00a9b2-102b-4b46-b69f-86efda64b178","Type":"ContainerStarted","Data":"4f1d8ba50145fd4ee2e6c42a6393970c7b08979750198faf8ed3142fed573c5a"} Jan 22 16:49:45 crc kubenswrapper[4758]: E0122 16:49:45.557050 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="772760c9-f1af-44f5-bfc0-9b949a639e9f" Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.583355 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.983674742 podStartE2EDuration="39.583336665s" podCreationTimestamp="2026-01-22 16:49:06 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.815989764 +0000 UTC m=+1189.299329049" lastFinishedPulling="2026-01-22 16:49:38.415651687 +0000 UTC m=+1199.898990972" observedRunningTime="2026-01-22 16:49:45.570496306 +0000 UTC m=+1207.053835601" 
watchObservedRunningTime="2026-01-22 16:49:45.583336665 +0000 UTC m=+1207.066675950" Jan 22 16:49:45 crc kubenswrapper[4758]: I0122 16:49:45.590581 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mpsgq" podStartSLOduration=22.426726275 podStartE2EDuration="34.590555742s" podCreationTimestamp="2026-01-22 16:49:11 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.569899954 +0000 UTC m=+1189.053239239" lastFinishedPulling="2026-01-22 16:49:39.733729411 +0000 UTC m=+1201.217068706" observedRunningTime="2026-01-22 16:49:45.59011393 +0000 UTC m=+1207.073453215" watchObservedRunningTime="2026-01-22 16:49:45.590555742 +0000 UTC m=+1207.073895057" Jan 22 16:49:47 crc kubenswrapper[4758]: I0122 16:49:47.574866 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"be871bb7-c028-4788-9769-51685b7290ea","Type":"ContainerStarted","Data":"54f9f53ffb779e716dc852302775c5adee468870eae08cc358cc89f3f4e80bb2"} Jan 22 16:49:47 crc kubenswrapper[4758]: I0122 16:49:47.577807 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78374f0a-c7de-486b-9118-fe2dccc5bdca","Type":"ContainerStarted","Data":"8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e"} Jan 22 16:49:47 crc kubenswrapper[4758]: I0122 16:49:47.581578 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerStarted","Data":"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4"} Jan 22 16:49:47 crc kubenswrapper[4758]: I0122 16:49:47.583811 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7805c55-6999-45a8-afd4-3fd1fa039b7a","Type":"ContainerStarted","Data":"c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d"} Jan 22 16:49:48 crc kubenswrapper[4758]: I0122 16:49:48.593552 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6sx98" event={"ID":"ca3428d6-c5a4-4c73-897f-7a03fa7c8463","Type":"ContainerStarted","Data":"5e44532ef7a560cfa33adc35855c6d6ef9483c75c262f67ad5f02002339d315f"} Jan 22 16:49:49 crc kubenswrapper[4758]: I0122 16:49:49.602372 4758 generic.go:334] "Generic (PLEG): container finished" podID="ca3428d6-c5a4-4c73-897f-7a03fa7c8463" containerID="5e44532ef7a560cfa33adc35855c6d6ef9483c75c262f67ad5f02002339d315f" exitCode=0 Jan 22 16:49:49 crc kubenswrapper[4758]: I0122 16:49:49.602431 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6sx98" event={"ID":"ca3428d6-c5a4-4c73-897f-7a03fa7c8463","Type":"ContainerDied","Data":"5e44532ef7a560cfa33adc35855c6d6ef9483c75c262f67ad5f02002339d315f"} Jan 22 16:49:51 crc kubenswrapper[4758]: I0122 16:49:51.535484 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 22 16:49:53 crc kubenswrapper[4758]: I0122 16:49:53.632535 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa00a9b2-102b-4b46-b69f-86efda64b178","Type":"ContainerStarted","Data":"d69f19b87df997004cdb18c0ec6255cca755050c4dcbf6015fcbda436a364eb5"} Jan 22 16:49:53 crc kubenswrapper[4758]: I0122 16:49:53.634492 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6sx98" 
event={"ID":"ca3428d6-c5a4-4c73-897f-7a03fa7c8463","Type":"ContainerStarted","Data":"a9147e5a1d3ef655e8449514db8a2e4807e900fb1a6066fd3a64a832aed97df8"} Jan 22 16:49:53 crc kubenswrapper[4758]: I0122 16:49:53.634538 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6sx98" event={"ID":"ca3428d6-c5a4-4c73-897f-7a03fa7c8463","Type":"ContainerStarted","Data":"d5370bec61a91501f8d8a31e6a3315bff9771976374a922a3f76e0cec64efc35"} Jan 22 16:49:53 crc kubenswrapper[4758]: I0122 16:49:53.634718 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:53 crc kubenswrapper[4758]: I0122 16:49:53.634789 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:49:53 crc kubenswrapper[4758]: I0122 16:49:53.635907 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fad5367d-b78c-4015-ac3a-4db4e3d3012a","Type":"ContainerStarted","Data":"b3b9912ceb95b3edad6d4a8a36980054a2fd1d2ca5b16a381625a3ecd25dc14e"} Jan 22 16:49:53 crc kubenswrapper[4758]: I0122 16:49:53.653721 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.289475224 podStartE2EDuration="42.653701388s" podCreationTimestamp="2026-01-22 16:49:11 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.865899883 +0000 UTC m=+1189.349239168" lastFinishedPulling="2026-01-22 16:49:51.230126047 +0000 UTC m=+1212.713465332" observedRunningTime="2026-01-22 16:49:53.653505881 +0000 UTC m=+1215.136845176" watchObservedRunningTime="2026-01-22 16:49:53.653701388 +0000 UTC m=+1215.137040673" Jan 22 16:49:53 crc kubenswrapper[4758]: I0122 16:49:53.685647 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.393461654 podStartE2EDuration="39.685619376s" podCreationTimestamp="2026-01-22 16:49:14 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.950499166 +0000 UTC m=+1189.433838441" lastFinishedPulling="2026-01-22 16:49:51.242656878 +0000 UTC m=+1212.725996163" observedRunningTime="2026-01-22 16:49:53.67252089 +0000 UTC m=+1215.155860175" watchObservedRunningTime="2026-01-22 16:49:53.685619376 +0000 UTC m=+1215.168958661" Jan 22 16:49:53 crc kubenswrapper[4758]: I0122 16:49:53.708868 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6sx98" podStartSLOduration=22.938173157 podStartE2EDuration="42.708846769s" podCreationTimestamp="2026-01-22 16:49:11 +0000 UTC" firstStartedPulling="2026-01-22 16:49:28.166394403 +0000 UTC m=+1189.649733688" lastFinishedPulling="2026-01-22 16:49:47.937068005 +0000 UTC m=+1209.420407300" observedRunningTime="2026-01-22 16:49:53.69602943 +0000 UTC m=+1215.179368725" watchObservedRunningTime="2026-01-22 16:49:53.708846769 +0000 UTC m=+1215.192186054" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.144060 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.186502 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.307094 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.344649 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.652615 4758 generic.go:334] "Generic (PLEG): container finished" podID="c980e076-b6f7-4713-8b10-08bea2949331" containerID="35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4" exitCode=0 Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.652710 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerDied","Data":"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4"} Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.653114 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.653148 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.692792 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.703329 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.987923 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cc5ddf659-w6pjs"] Jan 22 16:49:55 crc kubenswrapper[4758]: E0122 16:49:55.988601 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40854732-0c8c-4f6b-bb33-d599ba3de433" containerName="dnsmasq-dns" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.988620 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="40854732-0c8c-4f6b-bb33-d599ba3de433" containerName="dnsmasq-dns" Jan 22 16:49:55 crc kubenswrapper[4758]: E0122 16:49:55.988652 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40854732-0c8c-4f6b-bb33-d599ba3de433" containerName="init" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.988658 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="40854732-0c8c-4f6b-bb33-d599ba3de433" containerName="init" Jan 22 16:49:55 crc kubenswrapper[4758]: E0122 16:49:55.988674 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b242dc27-4e77-4ae4-a402-0aba8d78e356" containerName="init" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.988680 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b242dc27-4e77-4ae4-a402-0aba8d78e356" containerName="init" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.988853 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="40854732-0c8c-4f6b-bb33-d599ba3de433" containerName="dnsmasq-dns" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.988866 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b242dc27-4e77-4ae4-a402-0aba8d78e356" containerName="init" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.989705 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:55 crc kubenswrapper[4758]: I0122 16:49:55.993396 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.011254 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cc5ddf659-w6pjs"] Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.107518 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfr8d\" (UniqueName: \"kubernetes.io/projected/9b440304-b475-4636-bfd6-d449852c32ef-kube-api-access-jfr8d\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.107602 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-config\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.107667 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-dns-svc\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.107690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.118939 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-pbmk8"] Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.120497 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.123939 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.136523 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-pbmk8"] Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.186484 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.188348 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.194225 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.194605 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.194861 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.195158 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2zlds" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.204601 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.210611 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5335ec54-1c39-41ba-9788-672cde3d164c-config\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.210676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-dns-svc\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.210696 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.210732 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-ovs-rundir\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.211463 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cc5ddf659-w6pjs"] Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212051 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5335ec54-1c39-41ba-9788-672cde3d164c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212110 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-config\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212140 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-ovn-rundir\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212166 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5335ec54-1c39-41ba-9788-672cde3d164c-scripts\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212238 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5335ec54-1c39-41ba-9788-672cde3d164c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212269 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdrs6\" (UniqueName: \"kubernetes.io/projected/5335ec54-1c39-41ba-9788-672cde3d164c-kube-api-access-gdrs6\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212442 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfr8d\" (UniqueName: \"kubernetes.io/projected/9b440304-b475-4636-bfd6-d449852c32ef-kube-api-access-jfr8d\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212471 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5335ec54-1c39-41ba-9788-672cde3d164c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212503 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxk8g\" (UniqueName: \"kubernetes.io/projected/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-kube-api-access-zxk8g\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212564 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-combined-ca-bundle\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212585 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5335ec54-1c39-41ba-9788-672cde3d164c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.212647 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-config\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.213594 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-config\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.214262 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-dns-svc\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.214931 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: E0122 16:49:56.216149 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-jfr8d ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" podUID="9b440304-b475-4636-bfd6-d449852c32ef" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.247872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfr8d\" (UniqueName: \"kubernetes.io/projected/9b440304-b475-4636-bfd6-d449852c32ef-kube-api-access-jfr8d\") pod \"dnsmasq-dns-7cc5ddf659-w6pjs\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.271227 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69f676bd95-zrz5w"] Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.273633 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.279946 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.284335 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f676bd95-zrz5w"] Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.314878 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5335ec54-1c39-41ba-9788-672cde3d164c-config\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315182 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-dns-svc\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-ovs-rundir\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315274 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-config\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315327 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5335ec54-1c39-41ba-9788-672cde3d164c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315352 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-config\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315380 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-ovn-rundir\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315422 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315451 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5335ec54-1c39-41ba-9788-672cde3d164c-scripts\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315495 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5335ec54-1c39-41ba-9788-672cde3d164c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315523 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdrs6\" (UniqueName: \"kubernetes.io/projected/5335ec54-1c39-41ba-9788-672cde3d164c-kube-api-access-gdrs6\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315596 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-nb\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315623 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5335ec54-1c39-41ba-9788-672cde3d164c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315656 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxk8g\" (UniqueName: \"kubernetes.io/projected/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-kube-api-access-zxk8g\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315688 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq9qt\" (UniqueName: \"kubernetes.io/projected/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-kube-api-access-lq9qt\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315717 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5335ec54-1c39-41ba-9788-672cde3d164c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315756 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-combined-ca-bundle\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.315785 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-sb\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.316490 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5335ec54-1c39-41ba-9788-672cde3d164c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.316828 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5335ec54-1c39-41ba-9788-672cde3d164c-scripts\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.316852 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-ovn-rundir\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.316897 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-ovs-rundir\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.317764 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-config\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.320498 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5335ec54-1c39-41ba-9788-672cde3d164c-config\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.321614 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-combined-ca-bundle\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.322470 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5335ec54-1c39-41ba-9788-672cde3d164c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.323920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.326066 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5335ec54-1c39-41ba-9788-672cde3d164c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.326382 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5335ec54-1c39-41ba-9788-672cde3d164c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.336247 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxk8g\" (UniqueName: \"kubernetes.io/projected/15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff-kube-api-access-zxk8g\") pod \"ovn-controller-metrics-pbmk8\" (UID: \"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff\") " pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.340181 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdrs6\" (UniqueName: \"kubernetes.io/projected/5335ec54-1c39-41ba-9788-672cde3d164c-kube-api-access-gdrs6\") pod \"ovn-northd-0\" (UID: \"5335ec54-1c39-41ba-9788-672cde3d164c\") " pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.418378 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-config\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.418554 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-nb\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.418593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq9qt\" (UniqueName: \"kubernetes.io/projected/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-kube-api-access-lq9qt\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.418619 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-sb\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.418657 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-dns-svc\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.419558 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-config\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.420707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-nb\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.420781 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-sb\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.421047 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-dns-svc\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.440956 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq9qt\" (UniqueName: \"kubernetes.io/projected/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-kube-api-access-lq9qt\") pod \"dnsmasq-dns-69f676bd95-zrz5w\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.446140 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-pbmk8" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.507928 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.596350 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.668868 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.689009 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.726422 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-config\") pod \"9b440304-b475-4636-bfd6-d449852c32ef\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.726538 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-dns-svc\") pod \"9b440304-b475-4636-bfd6-d449852c32ef\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.726599 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-ovsdbserver-nb\") pod \"9b440304-b475-4636-bfd6-d449852c32ef\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.726659 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfr8d\" (UniqueName: \"kubernetes.io/projected/9b440304-b475-4636-bfd6-d449852c32ef-kube-api-access-jfr8d\") pod \"9b440304-b475-4636-bfd6-d449852c32ef\" (UID: \"9b440304-b475-4636-bfd6-d449852c32ef\") " Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.727225 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b440304-b475-4636-bfd6-d449852c32ef" (UID: "9b440304-b475-4636-bfd6-d449852c32ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.727248 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-config" (OuterVolumeSpecName: "config") pod "9b440304-b475-4636-bfd6-d449852c32ef" (UID: "9b440304-b475-4636-bfd6-d449852c32ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.727265 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b440304-b475-4636-bfd6-d449852c32ef" (UID: "9b440304-b475-4636-bfd6-d449852c32ef"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.728010 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.728031 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.728043 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b440304-b475-4636-bfd6-d449852c32ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.732232 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b440304-b475-4636-bfd6-d449852c32ef-kube-api-access-jfr8d" (OuterVolumeSpecName: "kube-api-access-jfr8d") pod "9b440304-b475-4636-bfd6-d449852c32ef" (UID: "9b440304-b475-4636-bfd6-d449852c32ef"). InnerVolumeSpecName "kube-api-access-jfr8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.829409 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfr8d\" (UniqueName: \"kubernetes.io/projected/9b440304-b475-4636-bfd6-d449852c32ef-kube-api-access-jfr8d\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:56 crc kubenswrapper[4758]: I0122 16:49:56.944048 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-pbmk8"] Jan 22 16:49:56 crc kubenswrapper[4758]: W0122 16:49:56.951668 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15cc31e0_f0b3_4f0f_aaf2_af71e3c34aff.slice/crio-7c5be1f1fe60440a3066c8881ce7a9c9e9689e31147706c3d3db497c6c50def7 WatchSource:0}: Error finding container 7c5be1f1fe60440a3066c8881ce7a9c9e9689e31147706c3d3db497c6c50def7: Status 404 returned error can't find the container with id 7c5be1f1fe60440a3066c8881ce7a9c9e9689e31147706c3d3db497c6c50def7 Jan 22 16:49:57 crc kubenswrapper[4758]: I0122 16:49:57.056330 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 16:49:57 crc kubenswrapper[4758]: W0122 16:49:57.058687 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5335ec54_1c39_41ba_9788_672cde3d164c.slice/crio-d743a628dac75795074822c69a760099b4243bae50ffa56f9611ac75bc565dcc WatchSource:0}: Error finding container d743a628dac75795074822c69a760099b4243bae50ffa56f9611ac75bc565dcc: Status 404 returned error can't find the container with id d743a628dac75795074822c69a760099b4243bae50ffa56f9611ac75bc565dcc Jan 22 16:49:57 crc kubenswrapper[4758]: I0122 16:49:57.122596 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f676bd95-zrz5w"] Jan 22 16:49:57 crc kubenswrapper[4758]: W0122 16:49:57.125698 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d34c50d_e958_4ab9_bdf1_8fbdab8dda8f.slice/crio-ca2d4724806b7124ff8e2b5ac1d41a6222935fc14862c6d9389716759c1a0ce8 WatchSource:0}: Error finding container ca2d4724806b7124ff8e2b5ac1d41a6222935fc14862c6d9389716759c1a0ce8: 
Status 404 returned error can't find the container with id ca2d4724806b7124ff8e2b5ac1d41a6222935fc14862c6d9389716759c1a0ce8 Jan 22 16:49:57 crc kubenswrapper[4758]: I0122 16:49:57.675931 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5335ec54-1c39-41ba-9788-672cde3d164c","Type":"ContainerStarted","Data":"d743a628dac75795074822c69a760099b4243bae50ffa56f9611ac75bc565dcc"} Jan 22 16:49:57 crc kubenswrapper[4758]: I0122 16:49:57.677425 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" event={"ID":"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f","Type":"ContainerStarted","Data":"ca2d4724806b7124ff8e2b5ac1d41a6222935fc14862c6d9389716759c1a0ce8"} Jan 22 16:49:57 crc kubenswrapper[4758]: I0122 16:49:57.678520 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc5ddf659-w6pjs" Jan 22 16:49:57 crc kubenswrapper[4758]: I0122 16:49:57.679186 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-pbmk8" event={"ID":"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff","Type":"ContainerStarted","Data":"7c5be1f1fe60440a3066c8881ce7a9c9e9689e31147706c3d3db497c6c50def7"} Jan 22 16:49:57 crc kubenswrapper[4758]: I0122 16:49:57.726521 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cc5ddf659-w6pjs"] Jan 22 16:49:57 crc kubenswrapper[4758]: I0122 16:49:57.731587 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cc5ddf659-w6pjs"] Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.701520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" event={"ID":"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f","Type":"ContainerStarted","Data":"578494ac61c1d0759137a7bdee09f2ad505102af3709868f87f0bd010cd40fa5"} Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.708211 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-pbmk8" event={"ID":"15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff","Type":"ContainerStarted","Data":"0df5cd160aae0a49fa86f3c11bd03d02b9e5209565a1e2b9207cc1cc9a266de3"} Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.738315 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f676bd95-zrz5w"] Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.763119 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79fb856f67-6q6hs"] Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.764444 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.785305 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79fb856f67-6q6hs"] Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.827481 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b440304-b475-4636-bfd6-d449852c32ef" path="/var/lib/kubelet/pods/9b440304-b475-4636-bfd6-d449852c32ef/volumes" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.865080 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-sb\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.865164 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltlmv\" (UniqueName: \"kubernetes.io/projected/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-kube-api-access-ltlmv\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.865321 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-dns-svc\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.865453 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-nb\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.865514 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-config\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.967603 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-sb\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.967779 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltlmv\" (UniqueName: \"kubernetes.io/projected/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-kube-api-access-ltlmv\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.968252 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-dns-svc\") pod 
\"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.968698 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-sb\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.969268 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-dns-svc\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.969462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-nb\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.970092 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-nb\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.970636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-config\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.971222 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-config\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:58 crc kubenswrapper[4758]: I0122 16:49:58.989589 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltlmv\" (UniqueName: \"kubernetes.io/projected/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-kube-api-access-ltlmv\") pod \"dnsmasq-dns-79fb856f67-6q6hs\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.083778 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.584336 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79fb856f67-6q6hs"] Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.723355 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" event={"ID":"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755","Type":"ContainerStarted","Data":"15f2f97b3e383494b10e0314d203753b62116dc958d4ceff459252336d3af890"} Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.923820 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.933283 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.935995 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.935992 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.936466 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.936543 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xgjlh" Jan 22 16:49:59 crc kubenswrapper[4758]: I0122 16:49:59.954315 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.091507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htbrr\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-kube-api-access-htbrr\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.091608 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c63f01b2-8785-4108-b532-b69bc2407a26-cache\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.091640 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c63f01b2-8785-4108-b532-b69bc2407a26-lock\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.091658 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.091685 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63f01b2-8785-4108-b532-b69bc2407a26-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " 
pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.091764 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.192913 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htbrr\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-kube-api-access-htbrr\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.193041 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c63f01b2-8785-4108-b532-b69bc2407a26-cache\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.193079 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c63f01b2-8785-4108-b532-b69bc2407a26-lock\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.193103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.193136 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63f01b2-8785-4108-b532-b69bc2407a26-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.193163 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: E0122 16:50:00.193270 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 16:50:00 crc kubenswrapper[4758]: E0122 16:50:00.193291 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 16:50:00 crc kubenswrapper[4758]: E0122 16:50:00.193331 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift podName:c63f01b2-8785-4108-b532-b69bc2407a26 nodeName:}" failed. No retries permitted until 2026-01-22 16:50:00.693315265 +0000 UTC m=+1222.176654550 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift") pod "swift-storage-0" (UID: "c63f01b2-8785-4108-b532-b69bc2407a26") : configmap "swift-ring-files" not found Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.193570 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.193641 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c63f01b2-8785-4108-b532-b69bc2407a26-cache\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.193648 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c63f01b2-8785-4108-b532-b69bc2407a26-lock\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.197763 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63f01b2-8785-4108-b532-b69bc2407a26-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.213839 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htbrr\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-kube-api-access-htbrr\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.217559 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.323242 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-q78gl"] Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.324616 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.327126 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.327254 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.328870 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.342710 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-q78gl"] Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.498728 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-combined-ca-bundle\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.499082 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3df63c93-1525-4b38-92e3-4d9b15a5c293-etc-swift\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.499254 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-swiftconf\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.499352 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-dispersionconf\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.499490 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-ring-data-devices\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.499613 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-scripts\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.499724 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skv96\" (UniqueName: \"kubernetes.io/projected/3df63c93-1525-4b38-92e3-4d9b15a5c293-kube-api-access-skv96\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 
16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.601319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-combined-ca-bundle\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.601465 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3df63c93-1525-4b38-92e3-4d9b15a5c293-etc-swift\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.601566 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-swiftconf\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.601593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-dispersionconf\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.601625 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-ring-data-devices\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.601659 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-scripts\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.601691 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skv96\" (UniqueName: \"kubernetes.io/projected/3df63c93-1525-4b38-92e3-4d9b15a5c293-kube-api-access-skv96\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.602589 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3df63c93-1525-4b38-92e3-4d9b15a5c293-etc-swift\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.602950 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-scripts\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.603298 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-ring-data-devices\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.606147 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-combined-ca-bundle\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.606581 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-dispersionconf\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.615250 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-swiftconf\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.622424 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skv96\" (UniqueName: \"kubernetes.io/projected/3df63c93-1525-4b38-92e3-4d9b15a5c293-kube-api-access-skv96\") pod \"swift-ring-rebalance-q78gl\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.647047 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:00 crc kubenswrapper[4758]: I0122 16:50:00.703716 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:00 crc kubenswrapper[4758]: E0122 16:50:00.704287 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 16:50:00 crc kubenswrapper[4758]: E0122 16:50:00.704310 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 16:50:00 crc kubenswrapper[4758]: E0122 16:50:00.704358 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift podName:c63f01b2-8785-4108-b532-b69bc2407a26 nodeName:}" failed. No retries permitted until 2026-01-22 16:50:01.704341278 +0000 UTC m=+1223.187680563 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift") pod "swift-storage-0" (UID: "c63f01b2-8785-4108-b532-b69bc2407a26") : configmap "swift-ring-files" not found Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.142630 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-q78gl"] Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.734919 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:01 crc kubenswrapper[4758]: E0122 16:50:01.735083 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 16:50:01 crc kubenswrapper[4758]: E0122 16:50:01.735099 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 16:50:01 crc kubenswrapper[4758]: E0122 16:50:01.735157 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift podName:c63f01b2-8785-4108-b532-b69bc2407a26 nodeName:}" failed. No retries permitted until 2026-01-22 16:50:03.735141831 +0000 UTC m=+1225.218481116 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift") pod "swift-storage-0" (UID: "c63f01b2-8785-4108-b532-b69bc2407a26") : configmap "swift-ring-files" not found Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.741514 4758 generic.go:334] "Generic (PLEG): container finished" podID="6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" containerID="578494ac61c1d0759137a7bdee09f2ad505102af3709868f87f0bd010cd40fa5" exitCode=0 Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.741576 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" event={"ID":"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f","Type":"ContainerDied","Data":"578494ac61c1d0759137a7bdee09f2ad505102af3709868f87f0bd010cd40fa5"} Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.743288 4758 generic.go:334] "Generic (PLEG): container finished" podID="3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf" containerID="94d82265dc604783013df0466145fc4a6dda5ffaeafe0be0d3d56500eb57d2ce" exitCode=0 Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.743583 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf","Type":"ContainerDied","Data":"94d82265dc604783013df0466145fc4a6dda5ffaeafe0be0d3d56500eb57d2ce"} Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.745512 4758 generic.go:334] "Generic (PLEG): container finished" podID="f52e2571-4001-441f-b7b7-b4746ae1c10d" containerID="46c6da199ec04b38876cd5c037745e5b671466dbd8ac1f9269f365d95e0bbbb2" exitCode=0 Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.745623 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f52e2571-4001-441f-b7b7-b4746ae1c10d","Type":"ContainerDied","Data":"46c6da199ec04b38876cd5c037745e5b671466dbd8ac1f9269f365d95e0bbbb2"} Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.749300 4758 generic.go:334] 
"Generic (PLEG): container finished" podID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" containerID="00fd018e3fa79c43af5ef0d5be2465fa8629a9da7b8faf7c664d2d91b19985ec" exitCode=0 Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.749368 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" event={"ID":"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755","Type":"ContainerDied","Data":"00fd018e3fa79c43af5ef0d5be2465fa8629a9da7b8faf7c664d2d91b19985ec"} Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.753626 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q78gl" event={"ID":"3df63c93-1525-4b38-92e3-4d9b15a5c293","Type":"ContainerStarted","Data":"39d8495e9971c2f2a000ce016267f014f89c9fa72088e7acdfd91fc84710c3c0"} Jan 22 16:50:01 crc kubenswrapper[4758]: I0122 16:50:01.822475 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-pbmk8" podStartSLOduration=5.822452298 podStartE2EDuration="5.822452298s" podCreationTimestamp="2026-01-22 16:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:50:01.79753407 +0000 UTC m=+1223.280873355" watchObservedRunningTime="2026-01-22 16:50:01.822452298 +0000 UTC m=+1223.305791583" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.669095 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.759479 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-dns-svc\") pod \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.759586 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-config\") pod \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.759632 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-nb\") pod \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.759666 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-sb\") pod \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.759836 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq9qt\" (UniqueName: \"kubernetes.io/projected/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-kube-api-access-lq9qt\") pod \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\" (UID: \"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f\") " Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.764882 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" 
event={"ID":"6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f","Type":"ContainerDied","Data":"ca2d4724806b7124ff8e2b5ac1d41a6222935fc14862c6d9389716759c1a0ce8"} Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.764904 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-kube-api-access-lq9qt" (OuterVolumeSpecName: "kube-api-access-lq9qt") pod "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" (UID: "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f"). InnerVolumeSpecName "kube-api-access-lq9qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.764931 4758 scope.go:117] "RemoveContainer" containerID="578494ac61c1d0759137a7bdee09f2ad505102af3709868f87f0bd010cd40fa5" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.765393 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f676bd95-zrz5w" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.784932 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" (UID: "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.788758 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-config" (OuterVolumeSpecName: "config") pod "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" (UID: "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.789297 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" (UID: "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.793508 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" (UID: "6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.862823 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.862861 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.862877 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.862891 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq9qt\" (UniqueName: \"kubernetes.io/projected/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-kube-api-access-lq9qt\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4758]: I0122 16:50:02.862905 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:03 crc kubenswrapper[4758]: I0122 16:50:03.119854 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f676bd95-zrz5w"] Jan 22 16:50:03 crc kubenswrapper[4758]: I0122 16:50:03.132972 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69f676bd95-zrz5w"] Jan 22 16:50:03 crc kubenswrapper[4758]: I0122 16:50:03.779204 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:03 crc kubenswrapper[4758]: E0122 16:50:03.779361 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 16:50:03 crc kubenswrapper[4758]: E0122 16:50:03.779374 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 16:50:03 crc kubenswrapper[4758]: E0122 16:50:03.779418 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift podName:c63f01b2-8785-4108-b532-b69bc2407a26 nodeName:}" failed. No retries permitted until 2026-01-22 16:50:07.779404106 +0000 UTC m=+1229.262743391 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift") pod "swift-storage-0" (UID: "c63f01b2-8785-4108-b532-b69bc2407a26") : configmap "swift-ring-files" not found Jan 22 16:50:04 crc kubenswrapper[4758]: I0122 16:50:04.823168 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" path="/var/lib/kubelet/pods/6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f/volumes" Jan 22 16:50:07 crc kubenswrapper[4758]: I0122 16:50:07.855710 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:07 crc kubenswrapper[4758]: E0122 16:50:07.855943 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 16:50:07 crc kubenswrapper[4758]: E0122 16:50:07.856396 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 16:50:07 crc kubenswrapper[4758]: E0122 16:50:07.856450 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift podName:c63f01b2-8785-4108-b532-b69bc2407a26 nodeName:}" failed. No retries permitted until 2026-01-22 16:50:15.85643456 +0000 UTC m=+1237.339773845 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift") pod "swift-storage-0" (UID: "c63f01b2-8785-4108-b532-b69bc2407a26") : configmap "swift-ring-files" not found Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.879079 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f52e2571-4001-441f-b7b7-b4746ae1c10d","Type":"ContainerStarted","Data":"2557bffc3fd58d8667a75ee0ebf849ae2de4888ad4b41d43a6fa1aee1ac6eace"} Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.883575 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"772760c9-f1af-44f5-bfc0-9b949a639e9f","Type":"ContainerStarted","Data":"6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7"} Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.884215 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.892159 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5335ec54-1c39-41ba-9788-672cde3d164c","Type":"ContainerStarted","Data":"7aa973f5646de8c10fb8ad2e69ab75c78b92a975dcd2ba49a614c0825880d844"} Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.892217 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5335ec54-1c39-41ba-9788-672cde3d164c","Type":"ContainerStarted","Data":"81ecf8a3e24abad2a0d999b1412b6397341038d556df1df1c0e6655674775443"} Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.893408 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.896577 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" event={"ID":"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755","Type":"ContainerStarted","Data":"e278e8759fff8b09177920664b40daf990a978ba76dafda5841c71e3d6b1843d"} Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.897317 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.898474 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q78gl" event={"ID":"3df63c93-1525-4b38-92e3-4d9b15a5c293","Type":"ContainerStarted","Data":"820a71547780914d70b2e12a343c814f96d28854af15f22f8992a6942467c0ed"} Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.904787 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerStarted","Data":"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121"} Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.907703 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf","Type":"ContainerStarted","Data":"afa4a5c57aadb40796ed76ed10940de3d8e76b61359045b6c8b1e71eaf74a471"} Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.908805 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=57.507762876 podStartE2EDuration="1m8.908787815s" podCreationTimestamp="2026-01-22 16:49:03 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.706303698 +0000 UTC m=+1189.189642983" lastFinishedPulling="2026-01-22 16:49:39.107328637 +0000 UTC m=+1200.590667922" observedRunningTime="2026-01-22 16:50:11.9016454 +0000 UTC m=+1233.384984685" watchObservedRunningTime="2026-01-22 16:50:11.908787815 +0000 UTC m=+1233.392127100" Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.927622 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-q78gl" podStartSLOduration=2.200582103 podStartE2EDuration="11.927552885s" podCreationTimestamp="2026-01-22 16:50:00 +0000 UTC" firstStartedPulling="2026-01-22 16:50:01.167855257 +0000 UTC m=+1222.651194542" lastFinishedPulling="2026-01-22 16:50:10.894826039 +0000 UTC m=+1232.378165324" observedRunningTime="2026-01-22 16:50:11.918343314 +0000 UTC m=+1233.401682599" watchObservedRunningTime="2026-01-22 16:50:11.927552885 +0000 UTC m=+1233.410892170" Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.939339 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" podStartSLOduration=13.939318045 podStartE2EDuration="13.939318045s" podCreationTimestamp="2026-01-22 16:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:50:11.936771197 +0000 UTC m=+1233.420110502" watchObservedRunningTime="2026-01-22 16:50:11.939318045 +0000 UTC m=+1233.422657330" Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.966839 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.2249001059999998 podStartE2EDuration="15.966814184s" podCreationTimestamp="2026-01-22 16:49:56 +0000 UTC" firstStartedPulling="2026-01-22 16:49:57.064468224 +0000 UTC m=+1218.547807509" lastFinishedPulling="2026-01-22 16:50:10.806382282 
+0000 UTC m=+1232.289721587" observedRunningTime="2026-01-22 16:50:11.955620259 +0000 UTC m=+1233.438959554" watchObservedRunningTime="2026-01-22 16:50:11.966814184 +0000 UTC m=+1233.450153479" Jan 22 16:50:11 crc kubenswrapper[4758]: I0122 16:50:11.976181 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=20.941342332 podStartE2EDuration="1m3.976161459s" podCreationTimestamp="2026-01-22 16:49:08 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.862995124 +0000 UTC m=+1189.346334409" lastFinishedPulling="2026-01-22 16:50:10.897814261 +0000 UTC m=+1232.381153536" observedRunningTime="2026-01-22 16:50:11.974425281 +0000 UTC m=+1233.457764576" watchObservedRunningTime="2026-01-22 16:50:11.976161459 +0000 UTC m=+1233.459500744" Jan 22 16:50:12 crc kubenswrapper[4758]: I0122 16:50:12.002348 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=54.742819424 podStartE2EDuration="1m8.002327591s" podCreationTimestamp="2026-01-22 16:49:04 +0000 UTC" firstStartedPulling="2026-01-22 16:49:26.152468095 +0000 UTC m=+1187.635807380" lastFinishedPulling="2026-01-22 16:49:39.411976262 +0000 UTC m=+1200.895315547" observedRunningTime="2026-01-22 16:50:11.995938917 +0000 UTC m=+1233.479278212" watchObservedRunningTime="2026-01-22 16:50:12.002327591 +0000 UTC m=+1233.485666876" Jan 22 16:50:13 crc kubenswrapper[4758]: I0122 16:50:13.837704 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:50:13 crc kubenswrapper[4758]: I0122 16:50:13.839131 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:50:13 crc kubenswrapper[4758]: I0122 16:50:13.925873 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerStarted","Data":"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05"} Jan 22 16:50:14 crc kubenswrapper[4758]: I0122 16:50:14.891690 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 22 16:50:14 crc kubenswrapper[4758]: I0122 16:50:14.892105 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 22 16:50:15 crc kubenswrapper[4758]: I0122 16:50:15.924047 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:15 crc kubenswrapper[4758]: E0122 16:50:15.924257 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 16:50:15 crc kubenswrapper[4758]: E0122 16:50:15.924312 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Jan 22 16:50:15 crc kubenswrapper[4758]: E0122 16:50:15.924398 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift podName:c63f01b2-8785-4108-b532-b69bc2407a26 nodeName:}" failed. No retries permitted until 2026-01-22 16:50:31.924372494 +0000 UTC m=+1253.407711809 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift") pod "swift-storage-0" (UID: "c63f01b2-8785-4108-b532-b69bc2407a26") : configmap "swift-ring-files" not found Jan 22 16:50:16 crc kubenswrapper[4758]: I0122 16:50:16.270693 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 22 16:50:16 crc kubenswrapper[4758]: I0122 16:50:16.271211 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 22 16:50:16 crc kubenswrapper[4758]: I0122 16:50:16.812049 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mpsgq" podUID="7911c0f6-531a-403c-861f-f9cd3ec18ce4" containerName="ovn-controller" probeResult="failure" output=< Jan 22 16:50:16 crc kubenswrapper[4758]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 16:50:16 crc kubenswrapper[4758]: > Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.282047 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.402214 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.624195 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.922787 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-30be-account-create-update-g9s9d"] Jan 22 16:50:18 crc kubenswrapper[4758]: E0122 16:50:18.923434 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" containerName="init" Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.923469 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" containerName="init" Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.923677 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d34c50d-e958-4ab9-bdf1-8fbdab8dda8f" containerName="init" Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.924371 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.932194 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.936084 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-xjlxr"] Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.937105 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.946762 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-30be-account-create-update-g9s9d"] Jan 22 16:50:18 crc kubenswrapper[4758]: I0122 16:50:18.954321 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-xjlxr"] Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.074633 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6d798-7571-43c0-8202-0634015602ff-operator-scripts\") pod \"watcher-db-create-xjlxr\" (UID: \"e7b6d798-7571-43c0-8202-0634015602ff\") " pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.075016 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qsd8\" (UniqueName: \"kubernetes.io/projected/29ddf744-aa02-471f-b73c-930924240fa9-kube-api-access-2qsd8\") pod \"watcher-30be-account-create-update-g9s9d\" (UID: \"29ddf744-aa02-471f-b73c-930924240fa9\") " pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.075083 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mlbg\" (UniqueName: \"kubernetes.io/projected/e7b6d798-7571-43c0-8202-0634015602ff-kube-api-access-5mlbg\") pod \"watcher-db-create-xjlxr\" (UID: \"e7b6d798-7571-43c0-8202-0634015602ff\") " pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.075366 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ddf744-aa02-471f-b73c-930924240fa9-operator-scripts\") pod \"watcher-30be-account-create-update-g9s9d\" (UID: \"29ddf744-aa02-471f-b73c-930924240fa9\") " pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.085980 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.126871 4758 generic.go:334] "Generic (PLEG): container finished" podID="78374f0a-c7de-486b-9118-fe2dccc5bdca" containerID="8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e" exitCode=0 Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.126994 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78374f0a-c7de-486b-9118-fe2dccc5bdca","Type":"ContainerDied","Data":"8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e"} Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.131872 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerStarted","Data":"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa"} Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.161285 4758 generic.go:334] "Generic (PLEG): container finished" podID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" containerID="c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d" exitCode=0 Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.162223 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7805c55-6999-45a8-afd4-3fd1fa039b7a","Type":"ContainerDied","Data":"c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d"} Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.184611 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6d798-7571-43c0-8202-0634015602ff-operator-scripts\") pod \"watcher-db-create-xjlxr\" (UID: \"e7b6d798-7571-43c0-8202-0634015602ff\") " pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.186394 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6d798-7571-43c0-8202-0634015602ff-operator-scripts\") pod \"watcher-db-create-xjlxr\" (UID: \"e7b6d798-7571-43c0-8202-0634015602ff\") " pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.202969 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6594fdd9c9-22rg8"] Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.203357 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" podUID="5a52e45a-35af-4c02-926d-d82f762b39da" containerName="dnsmasq-dns" containerID="cri-o://f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6" gracePeriod=10 Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.203707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qsd8\" (UniqueName: \"kubernetes.io/projected/29ddf744-aa02-471f-b73c-930924240fa9-kube-api-access-2qsd8\") pod \"watcher-30be-account-create-update-g9s9d\" (UID: \"29ddf744-aa02-471f-b73c-930924240fa9\") " pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.203755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mlbg\" (UniqueName: \"kubernetes.io/projected/e7b6d798-7571-43c0-8202-0634015602ff-kube-api-access-5mlbg\") pod \"watcher-db-create-xjlxr\" (UID: \"e7b6d798-7571-43c0-8202-0634015602ff\") " pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.203875 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ddf744-aa02-471f-b73c-930924240fa9-operator-scripts\") pod \"watcher-30be-account-create-update-g9s9d\" (UID: \"29ddf744-aa02-471f-b73c-930924240fa9\") " pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.204579 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ddf744-aa02-471f-b73c-930924240fa9-operator-scripts\") pod \"watcher-30be-account-create-update-g9s9d\" (UID: \"29ddf744-aa02-471f-b73c-930924240fa9\") " pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.269026 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mlbg\" (UniqueName: \"kubernetes.io/projected/e7b6d798-7571-43c0-8202-0634015602ff-kube-api-access-5mlbg\") pod \"watcher-db-create-xjlxr\" (UID: \"e7b6d798-7571-43c0-8202-0634015602ff\") " pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 
16:50:19.275568 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.286613 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.951371046 podStartE2EDuration="1m11.286572826s" podCreationTimestamp="2026-01-22 16:49:08 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.836688837 +0000 UTC m=+1189.320028132" lastFinishedPulling="2026-01-22 16:50:18.171890627 +0000 UTC m=+1239.655229912" observedRunningTime="2026-01-22 16:50:19.277253932 +0000 UTC m=+1240.760593227" watchObservedRunningTime="2026-01-22 16:50:19.286572826 +0000 UTC m=+1240.769912111" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.295954 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qsd8\" (UniqueName: \"kubernetes.io/projected/29ddf744-aa02-471f-b73c-930924240fa9-kube-api-access-2qsd8\") pod \"watcher-30be-account-create-update-g9s9d\" (UID: \"29ddf744-aa02-471f-b73c-930924240fa9\") " pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.557953 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.795896 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.881859 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-dns-svc\") pod \"5a52e45a-35af-4c02-926d-d82f762b39da\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.881965 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqp2b\" (UniqueName: \"kubernetes.io/projected/5a52e45a-35af-4c02-926d-d82f762b39da-kube-api-access-zqp2b\") pod \"5a52e45a-35af-4c02-926d-d82f762b39da\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.882060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-config\") pod \"5a52e45a-35af-4c02-926d-d82f762b39da\" (UID: \"5a52e45a-35af-4c02-926d-d82f762b39da\") " Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.888040 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a52e45a-35af-4c02-926d-d82f762b39da-kube-api-access-zqp2b" (OuterVolumeSpecName: "kube-api-access-zqp2b") pod "5a52e45a-35af-4c02-926d-d82f762b39da" (UID: "5a52e45a-35af-4c02-926d-d82f762b39da"). InnerVolumeSpecName "kube-api-access-zqp2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.931358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a52e45a-35af-4c02-926d-d82f762b39da" (UID: "5a52e45a-35af-4c02-926d-d82f762b39da"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.937467 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-config" (OuterVolumeSpecName: "config") pod "5a52e45a-35af-4c02-926d-d82f762b39da" (UID: "5a52e45a-35af-4c02-926d-d82f762b39da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.942513 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-xjlxr"] Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.984461 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.984488 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a52e45a-35af-4c02-926d-d82f762b39da-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:19 crc kubenswrapper[4758]: I0122 16:50:19.984498 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqp2b\" (UniqueName: \"kubernetes.io/projected/5a52e45a-35af-4c02-926d-d82f762b39da-kube-api-access-zqp2b\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.091071 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.164372 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-30be-account-create-update-g9s9d"] Jan 22 16:50:20 crc kubenswrapper[4758]: W0122 16:50:20.166112 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29ddf744_aa02_471f_b73c_930924240fa9.slice/crio-332246de6f8e00728179b45fd5cb5a60298f1fdc711251f0ad3f3b662c516493 WatchSource:0}: Error finding container 332246de6f8e00728179b45fd5cb5a60298f1fdc711251f0ad3f3b662c516493: Status 404 returned error can't find the container with id 332246de6f8e00728179b45fd5cb5a60298f1fdc711251f0ad3f3b662c516493 Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.170382 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78374f0a-c7de-486b-9118-fe2dccc5bdca","Type":"ContainerStarted","Data":"fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c"} Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.170633 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.172674 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7805c55-6999-45a8-afd4-3fd1fa039b7a","Type":"ContainerStarted","Data":"6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304"} Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.172997 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.174428 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xjlxr" 
event={"ID":"e7b6d798-7571-43c0-8202-0634015602ff","Type":"ContainerStarted","Data":"56c47df6160948fc27279f61dd09ba48f79d049b98b863f4b885e821fb6636e4"} Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.176152 4758 generic.go:334] "Generic (PLEG): container finished" podID="5a52e45a-35af-4c02-926d-d82f762b39da" containerID="f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6" exitCode=0 Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.176203 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" event={"ID":"5a52e45a-35af-4c02-926d-d82f762b39da","Type":"ContainerDied","Data":"f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6"} Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.176225 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" event={"ID":"5a52e45a-35af-4c02-926d-d82f762b39da","Type":"ContainerDied","Data":"f1408238b11824975f0e0d3d8b6b32cccb873594c417a3714a940b35d0a103bd"} Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.176250 4758 scope.go:117] "RemoveContainer" containerID="f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.176270 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6594fdd9c9-22rg8" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.178048 4758 generic.go:334] "Generic (PLEG): container finished" podID="be871bb7-c028-4788-9769-51685b7290ea" containerID="54f9f53ffb779e716dc852302775c5adee468870eae08cc358cc89f3f4e80bb2" exitCode=0 Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.178071 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"be871bb7-c028-4788-9769-51685b7290ea","Type":"ContainerDied","Data":"54f9f53ffb779e716dc852302775c5adee468870eae08cc358cc89f3f4e80bb2"} Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.249271 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=67.836742393 podStartE2EDuration="1m19.22115669s" podCreationTimestamp="2026-01-22 16:49:01 +0000 UTC" firstStartedPulling="2026-01-22 16:49:28.028108929 +0000 UTC m=+1189.511448214" lastFinishedPulling="2026-01-22 16:49:39.412523226 +0000 UTC m=+1200.895862511" observedRunningTime="2026-01-22 16:50:20.199027148 +0000 UTC m=+1241.682366443" watchObservedRunningTime="2026-01-22 16:50:20.22115669 +0000 UTC m=+1241.704495975" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.257211 4758 scope.go:117] "RemoveContainer" containerID="cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.372585 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=67.902925396 podStartE2EDuration="1m19.372557354s" podCreationTimestamp="2026-01-22 16:49:01 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.639814247 +0000 UTC m=+1189.123153532" lastFinishedPulling="2026-01-22 16:49:39.109446205 +0000 UTC m=+1200.592785490" observedRunningTime="2026-01-22 16:50:20.335371651 +0000 UTC m=+1241.818710956" watchObservedRunningTime="2026-01-22 16:50:20.372557354 +0000 UTC m=+1241.855896639" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.379402 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6594fdd9c9-22rg8"] Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.389017 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6594fdd9c9-22rg8"] Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.414925 4758 scope.go:117] "RemoveContainer" containerID="f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6" Jan 22 16:50:20 crc kubenswrapper[4758]: E0122 16:50:20.415871 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6\": container with ID starting with f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6 not found: ID does not exist" containerID="f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.415910 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6"} err="failed to get container status \"f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6\": rpc error: code = NotFound desc = could not find container \"f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6\": container with ID starting with f9eefb2edd497e8882b25a606d3567d882d0bffaad01240544d12a105b3b70d6 not found: ID does not exist" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.415935 4758 scope.go:117] "RemoveContainer" containerID="cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f" Jan 22 16:50:20 crc kubenswrapper[4758]: E0122 16:50:20.420829 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f\": container with ID starting with cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f not found: ID does not exist" containerID="cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.420859 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f"} err="failed to get container status \"cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f\": rpc error: code = NotFound desc = could not find container \"cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f\": container with ID starting with cdeea964999a167c1966de302fc92bedb1c73a9b1c922ccb47fdcb715d22cd0f not found: ID does not exist" Jan 22 16:50:20 crc kubenswrapper[4758]: I0122 16:50:20.820217 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a52e45a-35af-4c02-926d-d82f762b39da" path="/var/lib/kubelet/pods/5a52e45a-35af-4c02-926d-d82f762b39da/volumes" Jan 22 16:50:21 crc kubenswrapper[4758]: I0122 16:50:21.188094 4758 generic.go:334] "Generic (PLEG): container finished" podID="3df63c93-1525-4b38-92e3-4d9b15a5c293" containerID="820a71547780914d70b2e12a343c814f96d28854af15f22f8992a6942467c0ed" exitCode=0 Jan 22 16:50:21 crc kubenswrapper[4758]: I0122 16:50:21.188183 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q78gl" event={"ID":"3df63c93-1525-4b38-92e3-4d9b15a5c293","Type":"ContainerDied","Data":"820a71547780914d70b2e12a343c814f96d28854af15f22f8992a6942467c0ed"} Jan 22 16:50:21 crc kubenswrapper[4758]: I0122 
16:50:21.192425 4758 generic.go:334] "Generic (PLEG): container finished" podID="e7b6d798-7571-43c0-8202-0634015602ff" containerID="f4fdfdff907b80dc70c534d77dded51a1ac543c32451c95b631ccf3415267efd" exitCode=0 Jan 22 16:50:21 crc kubenswrapper[4758]: I0122 16:50:21.192478 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xjlxr" event={"ID":"e7b6d798-7571-43c0-8202-0634015602ff","Type":"ContainerDied","Data":"f4fdfdff907b80dc70c534d77dded51a1ac543c32451c95b631ccf3415267efd"} Jan 22 16:50:21 crc kubenswrapper[4758]: I0122 16:50:21.196491 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-30be-account-create-update-g9s9d" event={"ID":"29ddf744-aa02-471f-b73c-930924240fa9","Type":"ContainerStarted","Data":"332246de6f8e00728179b45fd5cb5a60298f1fdc711251f0ad3f3b662c516493"} Jan 22 16:50:21 crc kubenswrapper[4758]: I0122 16:50:21.584070 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 22 16:50:21 crc kubenswrapper[4758]: I0122 16:50:21.585868 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 22 16:50:21 crc kubenswrapper[4758]: I0122 16:50:21.786308 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 22 16:50:21 crc kubenswrapper[4758]: I0122 16:50:21.804261 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mpsgq" podUID="7911c0f6-531a-403c-861f-f9cd3ec18ce4" containerName="ovn-controller" probeResult="failure" output=< Jan 22 16:50:21 crc kubenswrapper[4758]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 16:50:21 crc kubenswrapper[4758]: > Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.206147 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"be871bb7-c028-4788-9769-51685b7290ea","Type":"ContainerStarted","Data":"79b9f381a02032b4a5b2f18716f5c65caf38cec3fb249011a4a7dccd11d195ce"} Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.206795 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.207716 4758 generic.go:334] "Generic (PLEG): container finished" podID="29ddf744-aa02-471f-b73c-930924240fa9" containerID="e586865eb66267a3854bb5f1f73b70e9e31667c2b02fd183592cbcd018d079f7" exitCode=0 Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.207766 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-30be-account-create-update-g9s9d" event={"ID":"29ddf744-aa02-471f-b73c-930924240fa9","Type":"ContainerDied","Data":"e586865eb66267a3854bb5f1f73b70e9e31667c2b02fd183592cbcd018d079f7"} Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.256547 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=68.487699652 podStartE2EDuration="1m20.256532146s" podCreationTimestamp="2026-01-22 16:49:02 +0000 UTC" firstStartedPulling="2026-01-22 16:49:27.643142648 +0000 UTC m=+1189.126481933" lastFinishedPulling="2026-01-22 16:49:39.411975142 +0000 UTC m=+1200.895314427" observedRunningTime="2026-01-22 16:50:22.251468127 +0000 UTC m=+1243.734807442" watchObservedRunningTime="2026-01-22 16:50:22.256532146 +0000 UTC m=+1243.739871431" Jan 22 16:50:22 crc 
kubenswrapper[4758]: I0122 16:50:22.660193 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.667080 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.751786 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mlbg\" (UniqueName: \"kubernetes.io/projected/e7b6d798-7571-43c0-8202-0634015602ff-kube-api-access-5mlbg\") pod \"e7b6d798-7571-43c0-8202-0634015602ff\" (UID: \"e7b6d798-7571-43c0-8202-0634015602ff\") " Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752262 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-ring-data-devices\") pod \"3df63c93-1525-4b38-92e3-4d9b15a5c293\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752306 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6d798-7571-43c0-8202-0634015602ff-operator-scripts\") pod \"e7b6d798-7571-43c0-8202-0634015602ff\" (UID: \"e7b6d798-7571-43c0-8202-0634015602ff\") " Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752339 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-dispersionconf\") pod \"3df63c93-1525-4b38-92e3-4d9b15a5c293\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752391 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-swiftconf\") pod \"3df63c93-1525-4b38-92e3-4d9b15a5c293\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752489 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3df63c93-1525-4b38-92e3-4d9b15a5c293-etc-swift\") pod \"3df63c93-1525-4b38-92e3-4d9b15a5c293\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752538 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-combined-ca-bundle\") pod \"3df63c93-1525-4b38-92e3-4d9b15a5c293\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752627 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-scripts\") pod \"3df63c93-1525-4b38-92e3-4d9b15a5c293\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752675 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skv96\" (UniqueName: \"kubernetes.io/projected/3df63c93-1525-4b38-92e3-4d9b15a5c293-kube-api-access-skv96\") pod \"3df63c93-1525-4b38-92e3-4d9b15a5c293\" (UID: \"3df63c93-1525-4b38-92e3-4d9b15a5c293\") " Jan 22 
16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752848 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b6d798-7571-43c0-8202-0634015602ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7b6d798-7571-43c0-8202-0634015602ff" (UID: "e7b6d798-7571-43c0-8202-0634015602ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.752868 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3df63c93-1525-4b38-92e3-4d9b15a5c293" (UID: "3df63c93-1525-4b38-92e3-4d9b15a5c293"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.753344 4758 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.753370 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6d798-7571-43c0-8202-0634015602ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.753826 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df63c93-1525-4b38-92e3-4d9b15a5c293-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3df63c93-1525-4b38-92e3-4d9b15a5c293" (UID: "3df63c93-1525-4b38-92e3-4d9b15a5c293"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.758508 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df63c93-1525-4b38-92e3-4d9b15a5c293-kube-api-access-skv96" (OuterVolumeSpecName: "kube-api-access-skv96") pod "3df63c93-1525-4b38-92e3-4d9b15a5c293" (UID: "3df63c93-1525-4b38-92e3-4d9b15a5c293"). InnerVolumeSpecName "kube-api-access-skv96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.768002 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b6d798-7571-43c0-8202-0634015602ff-kube-api-access-5mlbg" (OuterVolumeSpecName: "kube-api-access-5mlbg") pod "e7b6d798-7571-43c0-8202-0634015602ff" (UID: "e7b6d798-7571-43c0-8202-0634015602ff"). InnerVolumeSpecName "kube-api-access-5mlbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.771828 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3df63c93-1525-4b38-92e3-4d9b15a5c293" (UID: "3df63c93-1525-4b38-92e3-4d9b15a5c293"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.789663 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3df63c93-1525-4b38-92e3-4d9b15a5c293" (UID: "3df63c93-1525-4b38-92e3-4d9b15a5c293"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.800962 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-scripts" (OuterVolumeSpecName: "scripts") pod "3df63c93-1525-4b38-92e3-4d9b15a5c293" (UID: "3df63c93-1525-4b38-92e3-4d9b15a5c293"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.804134 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3df63c93-1525-4b38-92e3-4d9b15a5c293" (UID: "3df63c93-1525-4b38-92e3-4d9b15a5c293"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.855709 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.856294 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3df63c93-1525-4b38-92e3-4d9b15a5c293-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.856435 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skv96\" (UniqueName: \"kubernetes.io/projected/3df63c93-1525-4b38-92e3-4d9b15a5c293-kube-api-access-skv96\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.856560 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mlbg\" (UniqueName: \"kubernetes.io/projected/e7b6d798-7571-43c0-8202-0634015602ff-kube-api-access-5mlbg\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.856633 4758 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.856710 4758 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3df63c93-1525-4b38-92e3-4d9b15a5c293-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:22 crc kubenswrapper[4758]: I0122 16:50:22.856783 4758 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3df63c93-1525-4b38-92e3-4d9b15a5c293-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.217016 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-xjlxr" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.217660 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xjlxr" event={"ID":"e7b6d798-7571-43c0-8202-0634015602ff","Type":"ContainerDied","Data":"56c47df6160948fc27279f61dd09ba48f79d049b98b863f4b885e821fb6636e4"} Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.217685 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c47df6160948fc27279f61dd09ba48f79d049b98b863f4b885e821fb6636e4" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.219621 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q78gl" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.220396 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-q78gl" event={"ID":"3df63c93-1525-4b38-92e3-4d9b15a5c293","Type":"ContainerDied","Data":"39d8495e9971c2f2a000ce016267f014f89c9fa72088e7acdfd91fc84710c3c0"} Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.220425 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d8495e9971c2f2a000ce016267f014f89c9fa72088e7acdfd91fc84710c3c0" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.503844 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.542254 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5258f"] Jan 22 16:50:23 crc kubenswrapper[4758]: E0122 16:50:23.542805 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ddf744-aa02-471f-b73c-930924240fa9" containerName="mariadb-account-create-update" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.542821 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ddf744-aa02-471f-b73c-930924240fa9" containerName="mariadb-account-create-update" Jan 22 16:50:23 crc kubenswrapper[4758]: E0122 16:50:23.542838 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a52e45a-35af-4c02-926d-d82f762b39da" containerName="dnsmasq-dns" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.542845 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52e45a-35af-4c02-926d-d82f762b39da" containerName="dnsmasq-dns" Jan 22 16:50:23 crc kubenswrapper[4758]: E0122 16:50:23.542875 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a52e45a-35af-4c02-926d-d82f762b39da" containerName="init" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.542883 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52e45a-35af-4c02-926d-d82f762b39da" containerName="init" Jan 22 16:50:23 crc kubenswrapper[4758]: E0122 16:50:23.542895 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b6d798-7571-43c0-8202-0634015602ff" containerName="mariadb-database-create" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.542901 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b6d798-7571-43c0-8202-0634015602ff" containerName="mariadb-database-create" Jan 22 16:50:23 crc kubenswrapper[4758]: E0122 16:50:23.542909 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df63c93-1525-4b38-92e3-4d9b15a5c293" containerName="swift-ring-rebalance" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.542915 4758 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3df63c93-1525-4b38-92e3-4d9b15a5c293" containerName="swift-ring-rebalance" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.543169 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b6d798-7571-43c0-8202-0634015602ff" containerName="mariadb-database-create" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.543190 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ddf744-aa02-471f-b73c-930924240fa9" containerName="mariadb-account-create-update" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.543210 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df63c93-1525-4b38-92e3-4d9b15a5c293" containerName="swift-ring-rebalance" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.543219 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a52e45a-35af-4c02-926d-d82f762b39da" containerName="dnsmasq-dns" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.543963 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5258f" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.550123 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.577800 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qsd8\" (UniqueName: \"kubernetes.io/projected/29ddf744-aa02-471f-b73c-930924240fa9-kube-api-access-2qsd8\") pod \"29ddf744-aa02-471f-b73c-930924240fa9\" (UID: \"29ddf744-aa02-471f-b73c-930924240fa9\") " Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.577891 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ddf744-aa02-471f-b73c-930924240fa9-operator-scripts\") pod \"29ddf744-aa02-471f-b73c-930924240fa9\" (UID: \"29ddf744-aa02-471f-b73c-930924240fa9\") " Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.584125 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29ddf744-aa02-471f-b73c-930924240fa9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29ddf744-aa02-471f-b73c-930924240fa9" (UID: "29ddf744-aa02-471f-b73c-930924240fa9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.594456 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5258f"] Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.611659 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ddf744-aa02-471f-b73c-930924240fa9-kube-api-access-2qsd8" (OuterVolumeSpecName: "kube-api-access-2qsd8") pod "29ddf744-aa02-471f-b73c-930924240fa9" (UID: "29ddf744-aa02-471f-b73c-930924240fa9"). InnerVolumeSpecName "kube-api-access-2qsd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.680171 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c00ff189-8fdb-479b-8722-40dd27196b0e-operator-scripts\") pod \"root-account-create-update-5258f\" (UID: \"c00ff189-8fdb-479b-8722-40dd27196b0e\") " pod="openstack/root-account-create-update-5258f" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.680229 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9n9n\" (UniqueName: \"kubernetes.io/projected/c00ff189-8fdb-479b-8722-40dd27196b0e-kube-api-access-p9n9n\") pod \"root-account-create-update-5258f\" (UID: \"c00ff189-8fdb-479b-8722-40dd27196b0e\") " pod="openstack/root-account-create-update-5258f" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.680428 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qsd8\" (UniqueName: \"kubernetes.io/projected/29ddf744-aa02-471f-b73c-930924240fa9-kube-api-access-2qsd8\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.680446 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ddf744-aa02-471f-b73c-930924240fa9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.782314 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9n9n\" (UniqueName: \"kubernetes.io/projected/c00ff189-8fdb-479b-8722-40dd27196b0e-kube-api-access-p9n9n\") pod \"root-account-create-update-5258f\" (UID: \"c00ff189-8fdb-479b-8722-40dd27196b0e\") " pod="openstack/root-account-create-update-5258f" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.782493 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c00ff189-8fdb-479b-8722-40dd27196b0e-operator-scripts\") pod \"root-account-create-update-5258f\" (UID: \"c00ff189-8fdb-479b-8722-40dd27196b0e\") " pod="openstack/root-account-create-update-5258f" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.783255 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c00ff189-8fdb-479b-8722-40dd27196b0e-operator-scripts\") pod \"root-account-create-update-5258f\" (UID: \"c00ff189-8fdb-479b-8722-40dd27196b0e\") " pod="openstack/root-account-create-update-5258f" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.797888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9n9n\" (UniqueName: \"kubernetes.io/projected/c00ff189-8fdb-479b-8722-40dd27196b0e-kube-api-access-p9n9n\") pod \"root-account-create-update-5258f\" (UID: \"c00ff189-8fdb-479b-8722-40dd27196b0e\") " pod="openstack/root-account-create-update-5258f" Jan 22 16:50:23 crc kubenswrapper[4758]: I0122 16:50:23.973643 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5258f" Jan 22 16:50:24 crc kubenswrapper[4758]: I0122 16:50:24.227571 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-30be-account-create-update-g9s9d" event={"ID":"29ddf744-aa02-471f-b73c-930924240fa9","Type":"ContainerDied","Data":"332246de6f8e00728179b45fd5cb5a60298f1fdc711251f0ad3f3b662c516493"} Jan 22 16:50:24 crc kubenswrapper[4758]: I0122 16:50:24.227915 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="332246de6f8e00728179b45fd5cb5a60298f1fdc711251f0ad3f3b662c516493" Jan 22 16:50:24 crc kubenswrapper[4758]: I0122 16:50:24.227972 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-30be-account-create-update-g9s9d" Jan 22 16:50:24 crc kubenswrapper[4758]: I0122 16:50:24.430438 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5258f"] Jan 22 16:50:24 crc kubenswrapper[4758]: W0122 16:50:24.436780 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc00ff189_8fdb_479b_8722_40dd27196b0e.slice/crio-42109ecfa33a7ab9d34d1be4f70bc06dc70485ae55c3d02f8c249f5e71decfc5 WatchSource:0}: Error finding container 42109ecfa33a7ab9d34d1be4f70bc06dc70485ae55c3d02f8c249f5e71decfc5: Status 404 returned error can't find the container with id 42109ecfa33a7ab9d34d1be4f70bc06dc70485ae55c3d02f8c249f5e71decfc5 Jan 22 16:50:25 crc kubenswrapper[4758]: I0122 16:50:25.090511 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:25 crc kubenswrapper[4758]: I0122 16:50:25.092448 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:25 crc kubenswrapper[4758]: I0122 16:50:25.235897 4758 generic.go:334] "Generic (PLEG): container finished" podID="c00ff189-8fdb-479b-8722-40dd27196b0e" containerID="1d0dd193fe5f1c6b6c78b952d4d11eadc93119951988dacd1373b9ab6e7c6e1a" exitCode=0 Jan 22 16:50:25 crc kubenswrapper[4758]: I0122 16:50:25.237505 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5258f" event={"ID":"c00ff189-8fdb-479b-8722-40dd27196b0e","Type":"ContainerDied","Data":"1d0dd193fe5f1c6b6c78b952d4d11eadc93119951988dacd1373b9ab6e7c6e1a"} Jan 22 16:50:25 crc kubenswrapper[4758]: I0122 16:50:25.237533 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5258f" event={"ID":"c00ff189-8fdb-479b-8722-40dd27196b0e","Type":"ContainerStarted","Data":"42109ecfa33a7ab9d34d1be4f70bc06dc70485ae55c3d02f8c249f5e71decfc5"} Jan 22 16:50:25 crc kubenswrapper[4758]: I0122 16:50:25.239352 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.192592 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-l8rd6"] Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.196400 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.218260 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l8rd6"] Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.288972 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c3ab-account-create-update-4wv2p"] Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.290054 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.298505 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.305642 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c3ab-account-create-update-4wv2p"] Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.324344 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjkg\" (UniqueName: \"kubernetes.io/projected/6088aa85-eb17-48ba-badd-ea46ba4333bb-kube-api-access-lgjkg\") pod \"keystone-db-create-l8rd6\" (UID: \"6088aa85-eb17-48ba-badd-ea46ba4333bb\") " pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.324447 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6088aa85-eb17-48ba-badd-ea46ba4333bb-operator-scripts\") pod \"keystone-db-create-l8rd6\" (UID: \"6088aa85-eb17-48ba-badd-ea46ba4333bb\") " pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.426559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6088aa85-eb17-48ba-badd-ea46ba4333bb-operator-scripts\") pod \"keystone-db-create-l8rd6\" (UID: \"6088aa85-eb17-48ba-badd-ea46ba4333bb\") " pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.426803 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-operator-scripts\") pod \"keystone-c3ab-account-create-update-4wv2p\" (UID: \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\") " pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.426866 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgjkg\" (UniqueName: \"kubernetes.io/projected/6088aa85-eb17-48ba-badd-ea46ba4333bb-kube-api-access-lgjkg\") pod \"keystone-db-create-l8rd6\" (UID: \"6088aa85-eb17-48ba-badd-ea46ba4333bb\") " pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.426887 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r62k5\" (UniqueName: \"kubernetes.io/projected/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-kube-api-access-r62k5\") pod \"keystone-c3ab-account-create-update-4wv2p\" (UID: \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\") " pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.428676 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6088aa85-eb17-48ba-badd-ea46ba4333bb-operator-scripts\") pod \"keystone-db-create-l8rd6\" (UID: \"6088aa85-eb17-48ba-badd-ea46ba4333bb\") " pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.447293 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgjkg\" (UniqueName: \"kubernetes.io/projected/6088aa85-eb17-48ba-badd-ea46ba4333bb-kube-api-access-lgjkg\") pod \"keystone-db-create-l8rd6\" (UID: \"6088aa85-eb17-48ba-badd-ea46ba4333bb\") " pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.509709 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-l5sww"] Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.512146 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l5sww" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.576587 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.577286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f206eab-3576-41d8-b0b8-abbf89628582-operator-scripts\") pod \"placement-db-create-l5sww\" (UID: \"6f206eab-3576-41d8-b0b8-abbf89628582\") " pod="openstack/placement-db-create-l5sww" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.577356 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-operator-scripts\") pod \"keystone-c3ab-account-create-update-4wv2p\" (UID: \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\") " pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.577415 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r62k5\" (UniqueName: \"kubernetes.io/projected/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-kube-api-access-r62k5\") pod \"keystone-c3ab-account-create-update-4wv2p\" (UID: \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\") " pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.577444 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfffh\" (UniqueName: \"kubernetes.io/projected/6f206eab-3576-41d8-b0b8-abbf89628582-kube-api-access-tfffh\") pod \"placement-db-create-l5sww\" (UID: \"6f206eab-3576-41d8-b0b8-abbf89628582\") " pod="openstack/placement-db-create-l5sww" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.578136 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-operator-scripts\") pod \"keystone-c3ab-account-create-update-4wv2p\" (UID: \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\") " pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.597788 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l5sww"] Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.666237 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r62k5\" (UniqueName: \"kubernetes.io/projected/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-kube-api-access-r62k5\") pod \"keystone-c3ab-account-create-update-4wv2p\" (UID: \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\") " pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.679913 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f206eab-3576-41d8-b0b8-abbf89628582-operator-scripts\") pod \"placement-db-create-l5sww\" (UID: \"6f206eab-3576-41d8-b0b8-abbf89628582\") " pod="openstack/placement-db-create-l5sww" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.679989 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfffh\" (UniqueName: \"kubernetes.io/projected/6f206eab-3576-41d8-b0b8-abbf89628582-kube-api-access-tfffh\") pod \"placement-db-create-l5sww\" (UID: \"6f206eab-3576-41d8-b0b8-abbf89628582\") " pod="openstack/placement-db-create-l5sww" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.680944 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-afd0-account-create-update-jtfps"] Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.682142 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.682206 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f206eab-3576-41d8-b0b8-abbf89628582-operator-scripts\") pod \"placement-db-create-l5sww\" (UID: \"6f206eab-3576-41d8-b0b8-abbf89628582\") " pod="openstack/placement-db-create-l5sww" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.684183 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.703053 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfffh\" (UniqueName: \"kubernetes.io/projected/6f206eab-3576-41d8-b0b8-abbf89628582-kube-api-access-tfffh\") pod \"placement-db-create-l5sww\" (UID: \"6f206eab-3576-41d8-b0b8-abbf89628582\") " pod="openstack/placement-db-create-l5sww" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.717694 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-afd0-account-create-update-jtfps"] Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.781523 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlz8r\" (UniqueName: \"kubernetes.io/projected/b17a8111-b550-4c28-98bf-fe568e5f35f5-kube-api-access-jlz8r\") pod \"placement-afd0-account-create-update-jtfps\" (UID: \"b17a8111-b550-4c28-98bf-fe568e5f35f5\") " pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.781964 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17a8111-b550-4c28-98bf-fe568e5f35f5-operator-scripts\") pod \"placement-afd0-account-create-update-jtfps\" (UID: \"b17a8111-b550-4c28-98bf-fe568e5f35f5\") " pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.791684 4758 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/root-account-create-update-5258f" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.823543 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mpsgq" podUID="7911c0f6-531a-403c-861f-f9cd3ec18ce4" containerName="ovn-controller" probeResult="failure" output=< Jan 22 16:50:26 crc kubenswrapper[4758]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 16:50:26 crc kubenswrapper[4758]: > Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.861388 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.864496 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6sx98" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.883829 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlz8r\" (UniqueName: \"kubernetes.io/projected/b17a8111-b550-4c28-98bf-fe568e5f35f5-kube-api-access-jlz8r\") pod \"placement-afd0-account-create-update-jtfps\" (UID: \"b17a8111-b550-4c28-98bf-fe568e5f35f5\") " pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.883958 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17a8111-b550-4c28-98bf-fe568e5f35f5-operator-scripts\") pod \"placement-afd0-account-create-update-jtfps\" (UID: \"b17a8111-b550-4c28-98bf-fe568e5f35f5\") " pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.889139 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17a8111-b550-4c28-98bf-fe568e5f35f5-operator-scripts\") pod \"placement-afd0-account-create-update-jtfps\" (UID: \"b17a8111-b550-4c28-98bf-fe568e5f35f5\") " pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.905972 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l5sww" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.911908 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlz8r\" (UniqueName: \"kubernetes.io/projected/b17a8111-b550-4c28-98bf-fe568e5f35f5-kube-api-access-jlz8r\") pod \"placement-afd0-account-create-update-jtfps\" (UID: \"b17a8111-b550-4c28-98bf-fe568e5f35f5\") " pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.921310 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.985438 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9n9n\" (UniqueName: \"kubernetes.io/projected/c00ff189-8fdb-479b-8722-40dd27196b0e-kube-api-access-p9n9n\") pod \"c00ff189-8fdb-479b-8722-40dd27196b0e\" (UID: \"c00ff189-8fdb-479b-8722-40dd27196b0e\") " Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.985661 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c00ff189-8fdb-479b-8722-40dd27196b0e-operator-scripts\") pod \"c00ff189-8fdb-479b-8722-40dd27196b0e\" (UID: \"c00ff189-8fdb-479b-8722-40dd27196b0e\") " Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.986062 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c00ff189-8fdb-479b-8722-40dd27196b0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c00ff189-8fdb-479b-8722-40dd27196b0e" (UID: "c00ff189-8fdb-479b-8722-40dd27196b0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.987426 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c00ff189-8fdb-479b-8722-40dd27196b0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:26 crc kubenswrapper[4758]: I0122 16:50:26.991003 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00ff189-8fdb-479b-8722-40dd27196b0e-kube-api-access-p9n9n" (OuterVolumeSpecName: "kube-api-access-p9n9n") pod "c00ff189-8fdb-479b-8722-40dd27196b0e" (UID: "c00ff189-8fdb-479b-8722-40dd27196b0e"). InnerVolumeSpecName "kube-api-access-p9n9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.090042 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9n9n\" (UniqueName: \"kubernetes.io/projected/c00ff189-8fdb-479b-8722-40dd27196b0e-kube-api-access-p9n9n\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.090169 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.105352 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mpsgq-config-9qfgh"] Jan 22 16:50:27 crc kubenswrapper[4758]: E0122 16:50:27.105734 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00ff189-8fdb-479b-8722-40dd27196b0e" containerName="mariadb-account-create-update" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.105769 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00ff189-8fdb-479b-8722-40dd27196b0e" containerName="mariadb-account-create-update" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.105955 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00ff189-8fdb-479b-8722-40dd27196b0e" containerName="mariadb-account-create-update" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.106545 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.119507 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.125357 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mpsgq-config-9qfgh"] Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.165723 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l8rd6"] Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.273045 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l8rd6" event={"ID":"6088aa85-eb17-48ba-badd-ea46ba4333bb","Type":"ContainerStarted","Data":"fddc2584eca10c6089170a5d88ed7a63ef1b14a8f37388b17c2ed62490672c08"} Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.275848 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5258f" event={"ID":"c00ff189-8fdb-479b-8722-40dd27196b0e","Type":"ContainerDied","Data":"42109ecfa33a7ab9d34d1be4f70bc06dc70485ae55c3d02f8c249f5e71decfc5"} Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.275905 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42109ecfa33a7ab9d34d1be4f70bc06dc70485ae55c3d02f8c249f5e71decfc5" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.275968 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5258f" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.293512 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-additional-scripts\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.293579 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-log-ovn\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.293614 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.293639 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j57p7\" (UniqueName: \"kubernetes.io/projected/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-kube-api-access-j57p7\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.293660 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-scripts\") 
pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.293706 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run-ovn\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.395838 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-additional-scripts\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.395943 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-log-ovn\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.395977 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.396005 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j57p7\" (UniqueName: \"kubernetes.io/projected/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-kube-api-access-j57p7\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.396043 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-scripts\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.396098 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run-ovn\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.397113 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.397576 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-log-ovn\") pod 
\"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.397883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run-ovn\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.398325 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-additional-scripts\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.399233 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-scripts\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.420496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j57p7\" (UniqueName: \"kubernetes.io/projected/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-kube-api-access-j57p7\") pod \"ovn-controller-mpsgq-config-9qfgh\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.441331 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.554074 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l5sww"] Jan 22 16:50:27 crc kubenswrapper[4758]: W0122 16:50:27.563749 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f206eab_3576_41d8_b0b8_abbf89628582.slice/crio-34de667cefd297223925d188089f902737da32b4c2b44e0688b768f8368be378 WatchSource:0}: Error finding container 34de667cefd297223925d188089f902737da32b4c2b44e0688b768f8368be378: Status 404 returned error can't find the container with id 34de667cefd297223925d188089f902737da32b4c2b44e0688b768f8368be378 Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.622941 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c3ab-account-create-update-4wv2p"] Jan 22 16:50:27 crc kubenswrapper[4758]: W0122 16:50:27.637926 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e50e041_f10a_43dc_9ba9_1b8adf5d0296.slice/crio-c51a2d3c14b56225f8f3125baf746c3e5fd8c8269d5da8619ab1cb2b7a411cff WatchSource:0}: Error finding container c51a2d3c14b56225f8f3125baf746c3e5fd8c8269d5da8619ab1cb2b7a411cff: Status 404 returned error can't find the container with id c51a2d3c14b56225f8f3125baf746c3e5fd8c8269d5da8619ab1cb2b7a411cff Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.692758 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-afd0-account-create-update-jtfps"] Jan 22 16:50:27 crc kubenswrapper[4758]: W0122 16:50:27.705834 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb17a8111_b550_4c28_98bf_fe568e5f35f5.slice/crio-3fcfca7d6c38d7fcc24d14f3f33a62b9b6fe6301fae9f2c174175d292e1a41da WatchSource:0}: Error finding container 3fcfca7d6c38d7fcc24d14f3f33a62b9b6fe6301fae9f2c174175d292e1a41da: Status 404 returned error can't find the container with id 3fcfca7d6c38d7fcc24d14f3f33a62b9b6fe6301fae9f2c174175d292e1a41da Jan 22 16:50:27 crc kubenswrapper[4758]: I0122 16:50:27.744852 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mpsgq-config-9qfgh"] Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.299678 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.300544 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="prometheus" containerID="cri-o://355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121" gracePeriod=600 Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.301235 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="thanos-sidecar" containerID="cri-o://793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa" gracePeriod=600 Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.301290 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="config-reloader" 
containerID="cri-o://37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05" gracePeriod=600 Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.307637 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-afd0-account-create-update-jtfps" event={"ID":"b17a8111-b550-4c28-98bf-fe568e5f35f5","Type":"ContainerStarted","Data":"9524b56c17eb2be9b8ab61ae60d1b7412aa8f1c20eca9f8a67bfb978da9b521a"} Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.307690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-afd0-account-create-update-jtfps" event={"ID":"b17a8111-b550-4c28-98bf-fe568e5f35f5","Type":"ContainerStarted","Data":"3fcfca7d6c38d7fcc24d14f3f33a62b9b6fe6301fae9f2c174175d292e1a41da"} Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.329906 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mpsgq-config-9qfgh" event={"ID":"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4","Type":"ContainerStarted","Data":"de8d835b77f0252773e03ee0b650c5bc3ff09343adeb3efb346389c76a40bd8f"} Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.329973 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mpsgq-config-9qfgh" event={"ID":"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4","Type":"ContainerStarted","Data":"4eb0974e9eeed0af06e5258674c455b526330089be1d89cd4495a67205f5ae12"} Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.333551 4758 generic.go:334] "Generic (PLEG): container finished" podID="6f206eab-3576-41d8-b0b8-abbf89628582" containerID="0d899bbe793e7a5b80e44ff2448fbbba283d3f13640a7753a1fd7485004810a6" exitCode=0 Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.333710 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l5sww" event={"ID":"6f206eab-3576-41d8-b0b8-abbf89628582","Type":"ContainerDied","Data":"0d899bbe793e7a5b80e44ff2448fbbba283d3f13640a7753a1fd7485004810a6"} Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.334005 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l5sww" event={"ID":"6f206eab-3576-41d8-b0b8-abbf89628582","Type":"ContainerStarted","Data":"34de667cefd297223925d188089f902737da32b4c2b44e0688b768f8368be378"} Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.340711 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c3ab-account-create-update-4wv2p" event={"ID":"8e50e041-f10a-43dc-9ba9-1b8adf5d0296","Type":"ContainerStarted","Data":"9a928b13a6fc9a2c3c13e543346e2f247b7402f20c59f02530e85bedd2444b50"} Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.340844 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c3ab-account-create-update-4wv2p" event={"ID":"8e50e041-f10a-43dc-9ba9-1b8adf5d0296","Type":"ContainerStarted","Data":"c51a2d3c14b56225f8f3125baf746c3e5fd8c8269d5da8619ab1cb2b7a411cff"} Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.346269 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-afd0-account-create-update-jtfps" podStartSLOduration=2.346252255 podStartE2EDuration="2.346252255s" podCreationTimestamp="2026-01-22 16:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:50:28.340691183 +0000 UTC m=+1249.824030478" watchObservedRunningTime="2026-01-22 16:50:28.346252255 +0000 UTC m=+1249.829591540" Jan 22 16:50:28 crc 
kubenswrapper[4758]: I0122 16:50:28.347701 4758 generic.go:334] "Generic (PLEG): container finished" podID="6088aa85-eb17-48ba-badd-ea46ba4333bb" containerID="57dc96b60b41bef411c51e22d61f3a99adfd8b0a25b87b1c688415879ba8b0c8" exitCode=0 Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.347789 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l8rd6" event={"ID":"6088aa85-eb17-48ba-badd-ea46ba4333bb","Type":"ContainerDied","Data":"57dc96b60b41bef411c51e22d61f3a99adfd8b0a25b87b1c688415879ba8b0c8"} Jan 22 16:50:28 crc kubenswrapper[4758]: I0122 16:50:28.393906 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mpsgq-config-9qfgh" podStartSLOduration=1.393876882 podStartE2EDuration="1.393876882s" podCreationTimestamp="2026-01-22 16:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:50:28.391535389 +0000 UTC m=+1249.874874684" watchObservedRunningTime="2026-01-22 16:50:28.393876882 +0000 UTC m=+1249.877216167" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.339259 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.362321 4758 generic.go:334] "Generic (PLEG): container finished" podID="8e50e041-f10a-43dc-9ba9-1b8adf5d0296" containerID="9a928b13a6fc9a2c3c13e543346e2f247b7402f20c59f02530e85bedd2444b50" exitCode=0 Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.362383 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c3ab-account-create-update-4wv2p" event={"ID":"8e50e041-f10a-43dc-9ba9-1b8adf5d0296","Type":"ContainerDied","Data":"9a928b13a6fc9a2c3c13e543346e2f247b7402f20c59f02530e85bedd2444b50"} Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.363641 4758 generic.go:334] "Generic (PLEG): container finished" podID="b17a8111-b550-4c28-98bf-fe568e5f35f5" containerID="9524b56c17eb2be9b8ab61ae60d1b7412aa8f1c20eca9f8a67bfb978da9b521a" exitCode=0 Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.363681 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-afd0-account-create-update-jtfps" event={"ID":"b17a8111-b550-4c28-98bf-fe568e5f35f5","Type":"ContainerDied","Data":"9524b56c17eb2be9b8ab61ae60d1b7412aa8f1c20eca9f8a67bfb978da9b521a"} Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.364892 4758 generic.go:334] "Generic (PLEG): container finished" podID="0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" containerID="de8d835b77f0252773e03ee0b650c5bc3ff09343adeb3efb346389c76a40bd8f" exitCode=0 Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.364946 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mpsgq-config-9qfgh" event={"ID":"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4","Type":"ContainerDied","Data":"de8d835b77f0252773e03ee0b650c5bc3ff09343adeb3efb346389c76a40bd8f"} Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.368413 4758 generic.go:334] "Generic (PLEG): container finished" podID="c980e076-b6f7-4713-8b10-08bea2949331" containerID="793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa" exitCode=0 Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.368434 4758 generic.go:334] "Generic (PLEG): container finished" podID="c980e076-b6f7-4713-8b10-08bea2949331" containerID="37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05" 
exitCode=0 Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.368442 4758 generic.go:334] "Generic (PLEG): container finished" podID="c980e076-b6f7-4713-8b10-08bea2949331" containerID="355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121" exitCode=0 Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.368515 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerDied","Data":"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa"} Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.368536 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerDied","Data":"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05"} Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.368547 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerDied","Data":"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121"} Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.368557 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c980e076-b6f7-4713-8b10-08bea2949331","Type":"ContainerDied","Data":"6ac6f9402bbe61d74f7f9a4bdddab60fb5e210ad048b4ceb9052bc11747df09f"} Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.368608 4758 scope.go:117] "RemoveContainer" containerID="793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.368811 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.370661 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c3ab-account-create-update-4wv2p" podStartSLOduration=3.370650876 podStartE2EDuration="3.370650876s" podCreationTimestamp="2026-01-22 16:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:50:28.440592625 +0000 UTC m=+1249.923931910" watchObservedRunningTime="2026-01-22 16:50:29.370650876 +0000 UTC m=+1250.853990161" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.455734 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4d84\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-kube-api-access-v4d84\") pod \"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.455830 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-web-config\") pod \"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.455877 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-0\") pod \"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.455934 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-2\") pod \"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.455967 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-config\") pod \"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.456054 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-thanos-prometheus-http-client-file\") pod \"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.456092 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-tls-assets\") pod \"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.456130 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-1\") pod 
\"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.456152 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c980e076-b6f7-4713-8b10-08bea2949331-config-out\") pod \"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.456376 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"c980e076-b6f7-4713-8b10-08bea2949331\" (UID: \"c980e076-b6f7-4713-8b10-08bea2949331\") " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.463556 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.463566 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.463969 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.466419 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.468287 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-kube-api-access-v4d84" (OuterVolumeSpecName: "kube-api-access-v4d84") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "kube-api-access-v4d84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.475976 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.479031 4758 scope.go:117] "RemoveContainer" containerID="37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.499223 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-config" (OuterVolumeSpecName: "config") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.499228 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c980e076-b6f7-4713-8b10-08bea2949331-config-out" (OuterVolumeSpecName: "config-out") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.499947 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.549421 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-web-config" (OuterVolumeSpecName: "web-config") pod "c980e076-b6f7-4713-8b10-08bea2949331" (UID: "c980e076-b6f7-4713-8b10-08bea2949331"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559340 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4d84\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-kube-api-access-v4d84\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559369 4758 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-web-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559380 4758 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559394 4758 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559403 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559413 4758 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c980e076-b6f7-4713-8b10-08bea2949331-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559424 4758 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c980e076-b6f7-4713-8b10-08bea2949331-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559433 4758 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c980e076-b6f7-4713-8b10-08bea2949331-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559442 4758 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c980e076-b6f7-4713-8b10-08bea2949331-config-out\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.559466 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") on node \"crc\" " Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.597179 4758 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.597361 4758 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d") on node "crc" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.627165 4758 scope.go:117] "RemoveContainer" containerID="355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.649450 4758 scope.go:117] "RemoveContainer" containerID="35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.661302 4758 reconciler_common.go:293] "Volume detached for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.672066 4758 scope.go:117] "RemoveContainer" containerID="793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa" Jan 22 16:50:29 crc kubenswrapper[4758]: E0122 16:50:29.676136 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa\": container with ID starting with 793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa not found: ID does not exist" containerID="793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.676170 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa"} err="failed to get container status \"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa\": rpc error: code = NotFound desc = could not find container \"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa\": container with ID starting with 793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.676191 4758 scope.go:117] "RemoveContainer" containerID="37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05" Jan 22 16:50:29 crc kubenswrapper[4758]: E0122 16:50:29.677251 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05\": container with ID starting with 37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05 not found: ID does not exist" containerID="37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.677321 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05"} err="failed to get container status \"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05\": rpc error: code = NotFound desc = could not find container \"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05\": container with ID starting with 37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05 not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.677337 4758 scope.go:117] "RemoveContainer" 
containerID="355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121" Jan 22 16:50:29 crc kubenswrapper[4758]: E0122 16:50:29.679652 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121\": container with ID starting with 355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121 not found: ID does not exist" containerID="355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.679692 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121"} err="failed to get container status \"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121\": rpc error: code = NotFound desc = could not find container \"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121\": container with ID starting with 355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121 not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.679719 4758 scope.go:117] "RemoveContainer" containerID="35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4" Jan 22 16:50:29 crc kubenswrapper[4758]: E0122 16:50:29.682980 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4\": container with ID starting with 35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4 not found: ID does not exist" containerID="35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.683014 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4"} err="failed to get container status \"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4\": rpc error: code = NotFound desc = could not find container \"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4\": container with ID starting with 35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4 not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.683035 4758 scope.go:117] "RemoveContainer" containerID="793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.683800 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa"} err="failed to get container status \"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa\": rpc error: code = NotFound desc = could not find container \"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa\": container with ID starting with 793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.683824 4758 scope.go:117] "RemoveContainer" containerID="37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.684147 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05"} err="failed to get container status \"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05\": rpc error: code = NotFound desc = could not find container \"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05\": container with ID starting with 37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05 not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.684168 4758 scope.go:117] "RemoveContainer" containerID="355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.684347 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121"} err="failed to get container status \"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121\": rpc error: code = NotFound desc = could not find container \"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121\": container with ID starting with 355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121 not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.684365 4758 scope.go:117] "RemoveContainer" containerID="35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.684589 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4"} err="failed to get container status \"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4\": rpc error: code = NotFound desc = could not find container \"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4\": container with ID starting with 35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4 not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.684607 4758 scope.go:117] "RemoveContainer" containerID="793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.684899 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa"} err="failed to get container status \"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa\": rpc error: code = NotFound desc = could not find container \"793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa\": container with ID starting with 793dabbf2ae914b5fdcb63b06bea812ecf9789d4b3490d5fc117467358faceaa not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.684914 4758 scope.go:117] "RemoveContainer" containerID="37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.685079 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05"} err="failed to get container status \"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05\": rpc error: code = NotFound desc = could not find container \"37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05\": container with ID starting with 37d777e0cc8c2dcc68b3e8325d1f4f54cde7a93134d6a10cbe2498dec130ab05 not found: ID does not exist" Jan 
22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.685091 4758 scope.go:117] "RemoveContainer" containerID="355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.685470 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121"} err="failed to get container status \"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121\": rpc error: code = NotFound desc = could not find container \"355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121\": container with ID starting with 355d59c59b4ea48f82e5fb788c718362f2da845ff15a07cdcc6d5cdd038ac121 not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.685500 4758 scope.go:117] "RemoveContainer" containerID="35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.685733 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4"} err="failed to get container status \"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4\": rpc error: code = NotFound desc = could not find container \"35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4\": container with ID starting with 35f5e597c5f37af1104495c7dd0f4e746f2800bfdfb3055db1a816191fcd15d4 not found: ID does not exist" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.722822 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.734720 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.783662 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 16:50:29 crc kubenswrapper[4758]: E0122 16:50:29.784185 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="thanos-sidecar" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.784211 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="thanos-sidecar" Jan 22 16:50:29 crc kubenswrapper[4758]: E0122 16:50:29.784225 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="prometheus" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.784233 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="prometheus" Jan 22 16:50:29 crc kubenswrapper[4758]: E0122 16:50:29.784246 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="init-config-reloader" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.784255 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="init-config-reloader" Jan 22 16:50:29 crc kubenswrapper[4758]: E0122 16:50:29.784265 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="config-reloader" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.784271 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="config-reloader" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.784478 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="prometheus" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.784495 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="config-reloader" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.784514 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c980e076-b6f7-4713-8b10-08bea2949331" containerName="thanos-sidecar" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.786659 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.797563 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.800562 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.800581 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.800765 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.800866 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.800915 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.805294 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.805301 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4ftsd" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.807415 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.809288 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.976867 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5258f"] Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978185 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978231 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978276 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978308 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmltc\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-kube-api-access-tmltc\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978336 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978376 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978449 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978474 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978517 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978554 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.978570 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.979072 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.979390 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.986961 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5258f"] Jan 22 16:50:29 crc kubenswrapper[4758]: I0122 16:50:29.987343 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.024642 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-l5sww" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.081031 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgjkg\" (UniqueName: \"kubernetes.io/projected/6088aa85-eb17-48ba-badd-ea46ba4333bb-kube-api-access-lgjkg\") pod \"6088aa85-eb17-48ba-badd-ea46ba4333bb\" (UID: \"6088aa85-eb17-48ba-badd-ea46ba4333bb\") " Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.081310 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6088aa85-eb17-48ba-badd-ea46ba4333bb-operator-scripts\") pod \"6088aa85-eb17-48ba-badd-ea46ba4333bb\" (UID: \"6088aa85-eb17-48ba-badd-ea46ba4333bb\") " Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.081578 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.081669 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.081711 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.083894 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.083965 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.083956 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6088aa85-eb17-48ba-badd-ea46ba4333bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6088aa85-eb17-48ba-badd-ea46ba4333bb" (UID: "6088aa85-eb17-48ba-badd-ea46ba4333bb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.083991 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.084025 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.084068 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.084127 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.084154 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.084196 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.084240 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmltc\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-kube-api-access-tmltc\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.084258 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc 
kubenswrapper[4758]: I0122 16:50:30.084346 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6088aa85-eb17-48ba-badd-ea46ba4333bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.089049 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6088aa85-eb17-48ba-badd-ea46ba4333bb-kube-api-access-lgjkg" (OuterVolumeSpecName: "kube-api-access-lgjkg") pod "6088aa85-eb17-48ba-badd-ea46ba4333bb" (UID: "6088aa85-eb17-48ba-badd-ea46ba4333bb"). InnerVolumeSpecName "kube-api-access-lgjkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.090614 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.092034 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.092070 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.092679 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.093208 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.093883 4758 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.093916 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/51d824e7b7431a599087fae5dbad8d5d5ded71f29385012a23b0aa020d358d8d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.097469 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.097998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.099698 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.100835 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.102362 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.105440 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.115605 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmltc\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-kube-api-access-tmltc\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.144318 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.185995 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f206eab-3576-41d8-b0b8-abbf89628582-operator-scripts\") pod \"6f206eab-3576-41d8-b0b8-abbf89628582\" (UID: \"6f206eab-3576-41d8-b0b8-abbf89628582\") " Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.186105 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfffh\" (UniqueName: \"kubernetes.io/projected/6f206eab-3576-41d8-b0b8-abbf89628582-kube-api-access-tfffh\") pod \"6f206eab-3576-41d8-b0b8-abbf89628582\" (UID: \"6f206eab-3576-41d8-b0b8-abbf89628582\") " Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.186472 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgjkg\" (UniqueName: \"kubernetes.io/projected/6088aa85-eb17-48ba-badd-ea46ba4333bb-kube-api-access-lgjkg\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.186482 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f206eab-3576-41d8-b0b8-abbf89628582-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f206eab-3576-41d8-b0b8-abbf89628582" (UID: "6f206eab-3576-41d8-b0b8-abbf89628582"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.188910 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f206eab-3576-41d8-b0b8-abbf89628582-kube-api-access-tfffh" (OuterVolumeSpecName: "kube-api-access-tfffh") pod "6f206eab-3576-41d8-b0b8-abbf89628582" (UID: "6f206eab-3576-41d8-b0b8-abbf89628582"). InnerVolumeSpecName "kube-api-access-tfffh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.288562 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f206eab-3576-41d8-b0b8-abbf89628582-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.288600 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfffh\" (UniqueName: \"kubernetes.io/projected/6f206eab-3576-41d8-b0b8-abbf89628582-kube-api-access-tfffh\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.378225 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-l8rd6" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.378277 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l8rd6" event={"ID":"6088aa85-eb17-48ba-badd-ea46ba4333bb","Type":"ContainerDied","Data":"fddc2584eca10c6089170a5d88ed7a63ef1b14a8f37388b17c2ed62490672c08"} Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.378316 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fddc2584eca10c6089170a5d88ed7a63ef1b14a8f37388b17c2ed62490672c08" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.379915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l5sww" event={"ID":"6f206eab-3576-41d8-b0b8-abbf89628582","Type":"ContainerDied","Data":"34de667cefd297223925d188089f902737da32b4c2b44e0688b768f8368be378"} Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.379950 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34de667cefd297223925d188089f902737da32b4c2b44e0688b768f8368be378" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.379982 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l5sww" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.415031 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.777844 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.818439 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00ff189-8fdb-479b-8722-40dd27196b0e" path="/var/lib/kubelet/pods/c00ff189-8fdb-479b-8722-40dd27196b0e/volumes" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.819065 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c980e076-b6f7-4713-8b10-08bea2949331" path="/var/lib/kubelet/pods/c980e076-b6f7-4713-8b10-08bea2949331/volumes" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.880129 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.886557 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.901292 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r62k5\" (UniqueName: \"kubernetes.io/projected/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-kube-api-access-r62k5\") pod \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\" (UID: \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\") " Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.901404 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-operator-scripts\") pod \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\" (UID: \"8e50e041-f10a-43dc-9ba9-1b8adf5d0296\") " Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.902292 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e50e041-f10a-43dc-9ba9-1b8adf5d0296" (UID: "8e50e041-f10a-43dc-9ba9-1b8adf5d0296"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:30 crc kubenswrapper[4758]: I0122 16:50:30.907064 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-kube-api-access-r62k5" (OuterVolumeSpecName: "kube-api-access-r62k5") pod "8e50e041-f10a-43dc-9ba9-1b8adf5d0296" (UID: "8e50e041-f10a-43dc-9ba9-1b8adf5d0296"). InnerVolumeSpecName "kube-api-access-r62k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.003361 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-scripts\") pod \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.003606 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-additional-scripts\") pod \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.003699 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run\") pod \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.003768 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run-ovn\") pod \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.003831 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j57p7\" (UniqueName: \"kubernetes.io/projected/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-kube-api-access-j57p7\") pod \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.003915 4758 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlz8r\" (UniqueName: \"kubernetes.io/projected/b17a8111-b550-4c28-98bf-fe568e5f35f5-kube-api-access-jlz8r\") pod \"b17a8111-b550-4c28-98bf-fe568e5f35f5\" (UID: \"b17a8111-b550-4c28-98bf-fe568e5f35f5\") " Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.003909 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" (UID: "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.003900 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run" (OuterVolumeSpecName: "var-run") pod "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" (UID: "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.004613 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" (UID: "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.004983 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-scripts" (OuterVolumeSpecName: "scripts") pod "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" (UID: "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.013895 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17a8111-b550-4c28-98bf-fe568e5f35f5-operator-scripts\") pod \"b17a8111-b550-4c28-98bf-fe568e5f35f5\" (UID: \"b17a8111-b550-4c28-98bf-fe568e5f35f5\") " Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.013944 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-log-ovn\") pod \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\" (UID: \"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4\") " Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014334 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" (UID: "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014536 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b17a8111-b550-4c28-98bf-fe568e5f35f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b17a8111-b550-4c28-98bf-fe568e5f35f5" (UID: "b17a8111-b550-4c28-98bf-fe568e5f35f5"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014774 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17a8111-b550-4c28-98bf-fe568e5f35f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014800 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014814 4758 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014825 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014838 4758 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014850 4758 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014862 4758 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.014878 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r62k5\" (UniqueName: \"kubernetes.io/projected/8e50e041-f10a-43dc-9ba9-1b8adf5d0296-kube-api-access-r62k5\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.016714 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17a8111-b550-4c28-98bf-fe568e5f35f5-kube-api-access-jlz8r" (OuterVolumeSpecName: "kube-api-access-jlz8r") pod "b17a8111-b550-4c28-98bf-fe568e5f35f5" (UID: "b17a8111-b550-4c28-98bf-fe568e5f35f5"). InnerVolumeSpecName "kube-api-access-jlz8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.017957 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-kube-api-access-j57p7" (OuterVolumeSpecName: "kube-api-access-j57p7") pod "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" (UID: "0916e0a0-54bc-43e8-bc04-547f2d5ad2d4"). InnerVolumeSpecName "kube-api-access-j57p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:31 crc kubenswrapper[4758]: W0122 16:50:31.042689 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7a10f61_441f_4ec1_a6fa_c34ff9a75956.slice/crio-cd2e3ab4842ba9acedd02ccea2520aa7383259dfbbdc8651d3d46ea9f99551cb WatchSource:0}: Error finding container cd2e3ab4842ba9acedd02ccea2520aa7383259dfbbdc8651d3d46ea9f99551cb: Status 404 returned error can't find the container with id cd2e3ab4842ba9acedd02ccea2520aa7383259dfbbdc8651d3d46ea9f99551cb Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.048218 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.116806 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlz8r\" (UniqueName: \"kubernetes.io/projected/b17a8111-b550-4c28-98bf-fe568e5f35f5-kube-api-access-jlz8r\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.116851 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j57p7\" (UniqueName: \"kubernetes.io/projected/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4-kube-api-access-j57p7\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.398578 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mpsgq-config-9qfgh" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.398576 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mpsgq-config-9qfgh" event={"ID":"0916e0a0-54bc-43e8-bc04-547f2d5ad2d4","Type":"ContainerDied","Data":"4eb0974e9eeed0af06e5258674c455b526330089be1d89cd4495a67205f5ae12"} Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.398708 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eb0974e9eeed0af06e5258674c455b526330089be1d89cd4495a67205f5ae12" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.400103 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c3ab-account-create-update-4wv2p" event={"ID":"8e50e041-f10a-43dc-9ba9-1b8adf5d0296","Type":"ContainerDied","Data":"c51a2d3c14b56225f8f3125baf746c3e5fd8c8269d5da8619ab1cb2b7a411cff"} Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.400146 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c51a2d3c14b56225f8f3125baf746c3e5fd8c8269d5da8619ab1cb2b7a411cff" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.400124 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c3ab-account-create-update-4wv2p" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.401126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerStarted","Data":"cd2e3ab4842ba9acedd02ccea2520aa7383259dfbbdc8651d3d46ea9f99551cb"} Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.406897 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-afd0-account-create-update-jtfps" event={"ID":"b17a8111-b550-4c28-98bf-fe568e5f35f5","Type":"ContainerDied","Data":"3fcfca7d6c38d7fcc24d14f3f33a62b9b6fe6301fae9f2c174175d292e1a41da"} Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.406931 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fcfca7d6c38d7fcc24d14f3f33a62b9b6fe6301fae9f2c174175d292e1a41da" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.406977 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-afd0-account-create-update-jtfps" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.542904 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mpsgq-config-9qfgh"] Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.584116 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mpsgq-config-9qfgh"] Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.841027 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mpsgq" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.929974 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:31 crc kubenswrapper[4758]: I0122 16:50:31.938350 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c63f01b2-8785-4108-b532-b69bc2407a26-etc-swift\") pod \"swift-storage-0\" (UID: \"c63f01b2-8785-4108-b532-b69bc2407a26\") " pod="openstack/swift-storage-0" Jan 22 16:50:32 crc kubenswrapper[4758]: I0122 16:50:32.060175 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 22 16:50:32 crc kubenswrapper[4758]: W0122 16:50:32.608193 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc63f01b2_8785_4108_b532_b69bc2407a26.slice/crio-5e996aaf3ffc96bca959f931b6d832d06858f3d27d90c40ec57569890c5214bf WatchSource:0}: Error finding container 5e996aaf3ffc96bca959f931b6d832d06858f3d27d90c40ec57569890c5214bf: Status 404 returned error can't find the container with id 5e996aaf3ffc96bca959f931b6d832d06858f3d27d90c40ec57569890c5214bf Jan 22 16:50:32 crc kubenswrapper[4758]: I0122 16:50:32.614122 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 16:50:32 crc kubenswrapper[4758]: I0122 16:50:32.819302 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" path="/var/lib/kubelet/pods/0916e0a0-54bc-43e8-bc04-547f2d5ad2d4/volumes" Jan 22 16:50:32 crc kubenswrapper[4758]: I0122 16:50:32.931307 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="78374f0a-c7de-486b-9118-fe2dccc5bdca" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 22 16:50:33 crc kubenswrapper[4758]: I0122 16:50:33.316401 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 22 16:50:33 crc kubenswrapper[4758]: I0122 16:50:33.423844 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"5e996aaf3ffc96bca959f931b6d832d06858f3d27d90c40ec57569890c5214bf"} Jan 22 16:50:33 crc kubenswrapper[4758]: I0122 16:50:33.670256 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="be871bb7-c028-4788-9769-51685b7290ea" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.435822 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"6bf697c7a60ee44decb9b9d86308a882418f89a56922bf1624a5b55eefb5d1dd"} Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.435866 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"b7ad98abcfb508759c741962d1432882ce7af7a2fadd6f41aa77047c878555eb"} Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.435875 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"a0b411c319634965a12bde2a0f0eb59ef06a43772efcf0d85f9b5ffaef662733"} Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.437442 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerStarted","Data":"46011a6b7ef00eea6019191c83e55d400ae06da48727b373c7d23e640120b934"} Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.980676 4758 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nkd9g"] Jan 22 16:50:34 crc kubenswrapper[4758]: E0122 16:50:34.981632 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e50e041-f10a-43dc-9ba9-1b8adf5d0296" containerName="mariadb-account-create-update" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.981666 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e50e041-f10a-43dc-9ba9-1b8adf5d0296" containerName="mariadb-account-create-update" Jan 22 16:50:34 crc kubenswrapper[4758]: E0122 16:50:34.981700 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" containerName="ovn-config" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.981714 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" containerName="ovn-config" Jan 22 16:50:34 crc kubenswrapper[4758]: E0122 16:50:34.981736 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f206eab-3576-41d8-b0b8-abbf89628582" containerName="mariadb-database-create" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.981777 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f206eab-3576-41d8-b0b8-abbf89628582" containerName="mariadb-database-create" Jan 22 16:50:34 crc kubenswrapper[4758]: E0122 16:50:34.981803 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6088aa85-eb17-48ba-badd-ea46ba4333bb" containerName="mariadb-database-create" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.981813 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6088aa85-eb17-48ba-badd-ea46ba4333bb" containerName="mariadb-database-create" Jan 22 16:50:34 crc kubenswrapper[4758]: E0122 16:50:34.981829 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17a8111-b550-4c28-98bf-fe568e5f35f5" containerName="mariadb-account-create-update" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.981840 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17a8111-b550-4c28-98bf-fe568e5f35f5" containerName="mariadb-account-create-update" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.982147 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e50e041-f10a-43dc-9ba9-1b8adf5d0296" containerName="mariadb-account-create-update" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.982178 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17a8111-b550-4c28-98bf-fe568e5f35f5" containerName="mariadb-account-create-update" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.982202 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6088aa85-eb17-48ba-badd-ea46ba4333bb" containerName="mariadb-database-create" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.982222 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0916e0a0-54bc-43e8-bc04-547f2d5ad2d4" containerName="ovn-config" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.982248 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f206eab-3576-41d8-b0b8-abbf89628582" containerName="mariadb-database-create" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.983143 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:34 crc kubenswrapper[4758]: I0122 16:50:34.989807 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nkd9g"] Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.033591 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtxmp\" (UniqueName: \"kubernetes.io/projected/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-kube-api-access-xtxmp\") pod \"root-account-create-update-nkd9g\" (UID: \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\") " pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.033920 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.033933 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-operator-scripts\") pod \"root-account-create-update-nkd9g\" (UID: \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\") " pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.136017 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-operator-scripts\") pod \"root-account-create-update-nkd9g\" (UID: \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\") " pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.136216 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtxmp\" (UniqueName: \"kubernetes.io/projected/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-kube-api-access-xtxmp\") pod \"root-account-create-update-nkd9g\" (UID: \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\") " pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.137225 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-operator-scripts\") pod \"root-account-create-update-nkd9g\" (UID: \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\") " pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.166407 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtxmp\" (UniqueName: \"kubernetes.io/projected/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-kube-api-access-xtxmp\") pod \"root-account-create-update-nkd9g\" (UID: \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\") " pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.358570 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.462997 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"f98134bc46fe77d73b2468889262b98d8841fac8222e2d623e75f0e66e9e3718"} Jan 22 16:50:35 crc kubenswrapper[4758]: I0122 16:50:35.793909 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nkd9g"] Jan 22 16:50:36 crc kubenswrapper[4758]: I0122 16:50:36.473393 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nkd9g" event={"ID":"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd","Type":"ContainerStarted","Data":"18e25043ab877bb2458ff5807e35169391dfd0b5fdb2cf2112b01d99b511d337"} Jan 22 16:50:36 crc kubenswrapper[4758]: I0122 16:50:36.473777 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nkd9g" event={"ID":"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd","Type":"ContainerStarted","Data":"1f32352067fe71f9ded0b5b548bda2506f13b2ff956304c7410600dc90ac0c79"} Jan 22 16:50:36 crc kubenswrapper[4758]: I0122 16:50:36.500272 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-nkd9g" podStartSLOduration=2.500249018 podStartE2EDuration="2.500249018s" podCreationTimestamp="2026-01-22 16:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:50:36.488717314 +0000 UTC m=+1257.972056609" watchObservedRunningTime="2026-01-22 16:50:36.500249018 +0000 UTC m=+1257.983588313" Jan 22 16:50:37 crc kubenswrapper[4758]: I0122 16:50:37.481926 4758 generic.go:334] "Generic (PLEG): container finished" podID="62ee1a0a-56ed-4b6f-9331-9794a22bf5dd" containerID="18e25043ab877bb2458ff5807e35169391dfd0b5fdb2cf2112b01d99b511d337" exitCode=0 Jan 22 16:50:37 crc kubenswrapper[4758]: I0122 16:50:37.482088 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nkd9g" event={"ID":"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd","Type":"ContainerDied","Data":"18e25043ab877bb2458ff5807e35169391dfd0b5fdb2cf2112b01d99b511d337"} Jan 22 16:50:37 crc kubenswrapper[4758]: I0122 16:50:37.494485 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"ecd8541de6339890f41fb726428a16fd8cd42661c3589db99ed37c5d2447f436"} Jan 22 16:50:38 crc kubenswrapper[4758]: I0122 16:50:38.505251 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"d33f291017ae901b064c6db3eeff8e1bebad1d2b1176bb3fbceb77bf2cb27b00"} Jan 22 16:50:38 crc kubenswrapper[4758]: I0122 16:50:38.505538 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"9697d68e03147c06d5130dc6510a1efd52183711cdea2f4d00480c21f15a2141"} Jan 22 16:50:38 crc kubenswrapper[4758]: I0122 16:50:38.505549 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"097732a6d0eb1849227d15451588d813ed30ea0a9158f308eadf9aba6736d674"} Jan 22 16:50:38 
crc kubenswrapper[4758]: I0122 16:50:38.909416 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.089600 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-operator-scripts\") pod \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\" (UID: \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\") " Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.089820 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtxmp\" (UniqueName: \"kubernetes.io/projected/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-kube-api-access-xtxmp\") pod \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\" (UID: \"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd\") " Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.090765 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62ee1a0a-56ed-4b6f-9331-9794a22bf5dd" (UID: "62ee1a0a-56ed-4b6f-9331-9794a22bf5dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.094114 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-kube-api-access-xtxmp" (OuterVolumeSpecName: "kube-api-access-xtxmp") pod "62ee1a0a-56ed-4b6f-9331-9794a22bf5dd" (UID: "62ee1a0a-56ed-4b6f-9331-9794a22bf5dd"). InnerVolumeSpecName "kube-api-access-xtxmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.192969 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.193037 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtxmp\" (UniqueName: \"kubernetes.io/projected/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd-kube-api-access-xtxmp\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.662128 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"53694dbe1d2deaebc4e197536831e041e061ada6fd266aba360444a0f3367b54"} Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.662200 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"52c7d8894447696aaacb654b75533df65461482bd5560e626e6d84ed61f968ca"} Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.662220 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"915a11e50de21163e3333d6743e1a704753c8d44ef8632dd3f4c0f9da82a89a7"} Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.678044 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nkd9g" 
event={"ID":"62ee1a0a-56ed-4b6f-9331-9794a22bf5dd","Type":"ContainerDied","Data":"1f32352067fe71f9ded0b5b548bda2506f13b2ff956304c7410600dc90ac0c79"} Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.678082 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f32352067fe71f9ded0b5b548bda2506f13b2ff956304c7410600dc90ac0c79" Jan 22 16:50:39 crc kubenswrapper[4758]: I0122 16:50:39.678151 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nkd9g" Jan 22 16:50:40 crc kubenswrapper[4758]: I0122 16:50:40.694250 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"c0534331f610cbec0e9babd710242d8f707f8e9b43fe87ba6618c4a1ddb5d88a"} Jan 22 16:50:40 crc kubenswrapper[4758]: I0122 16:50:40.694653 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"002c7a133d48fc30cf244328df8e5d45f618cbad573aa4dacef68d67d11a5772"} Jan 22 16:50:40 crc kubenswrapper[4758]: I0122 16:50:40.694672 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"f6e4f667f5054d0618d6a9f68150b127c00853f755f6576bbbd7adf054b23cd9"} Jan 22 16:50:40 crc kubenswrapper[4758]: I0122 16:50:40.698569 4758 generic.go:334] "Generic (PLEG): container finished" podID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerID="46011a6b7ef00eea6019191c83e55d400ae06da48727b373c7d23e640120b934" exitCode=0 Jan 22 16:50:40 crc kubenswrapper[4758]: I0122 16:50:40.698607 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerDied","Data":"46011a6b7ef00eea6019191c83e55d400ae06da48727b373c7d23e640120b934"} Jan 22 16:50:41 crc kubenswrapper[4758]: I0122 16:50:41.713763 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c63f01b2-8785-4108-b532-b69bc2407a26","Type":"ContainerStarted","Data":"a59aba57c0d54ed9a660b208169e126153285818a50de8c5dc7d4bc467f7006a"} Jan 22 16:50:41 crc kubenswrapper[4758]: I0122 16:50:41.715906 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerStarted","Data":"bbafea354bc8ab01a2fcb8bfb3408bcbad92c9e0c5610e8f2ca2556cf992d016"} Jan 22 16:50:41 crc kubenswrapper[4758]: I0122 16:50:41.769121 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.52590547 podStartE2EDuration="43.76909501s" podCreationTimestamp="2026-01-22 16:49:58 +0000 UTC" firstStartedPulling="2026-01-22 16:50:32.610876616 +0000 UTC m=+1254.094215901" lastFinishedPulling="2026-01-22 16:50:38.854066156 +0000 UTC m=+1260.337405441" observedRunningTime="2026-01-22 16:50:41.761437102 +0000 UTC m=+1263.244776427" watchObservedRunningTime="2026-01-22 16:50:41.76909501 +0000 UTC m=+1263.252434315" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.074386 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b65dddd8f-twdkl"] Jan 22 16:50:42 crc kubenswrapper[4758]: E0122 16:50:42.074832 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="62ee1a0a-56ed-4b6f-9331-9794a22bf5dd" containerName="mariadb-account-create-update" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.074855 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ee1a0a-56ed-4b6f-9331-9794a22bf5dd" containerName="mariadb-account-create-update" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.075081 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ee1a0a-56ed-4b6f-9331-9794a22bf5dd" containerName="mariadb-account-create-update" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.076168 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.084806 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.090274 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b65dddd8f-twdkl"] Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.150212 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-svc\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.150259 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjxs5\" (UniqueName: \"kubernetes.io/projected/287aac2e-b390-416b-be0e-4b8b07e5e486-kube-api-access-zjxs5\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.150291 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-sb\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.150372 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-nb\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.150442 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-config\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.150479 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-swift-storage-0\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.252222 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-config\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.252289 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-swift-storage-0\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.252345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-svc\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.252364 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjxs5\" (UniqueName: \"kubernetes.io/projected/287aac2e-b390-416b-be0e-4b8b07e5e486-kube-api-access-zjxs5\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.252386 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-sb\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.252458 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-nb\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.253261 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-nb\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.253943 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-config\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.254498 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-swift-storage-0\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.255070 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-svc\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.255926 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-sb\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.273048 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjxs5\" (UniqueName: \"kubernetes.io/projected/287aac2e-b390-416b-be0e-4b8b07e5e486-kube-api-access-zjxs5\") pod \"dnsmasq-dns-6b65dddd8f-twdkl\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.400786 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.773276 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b65dddd8f-twdkl"] Jan 22 16:50:42 crc kubenswrapper[4758]: W0122 16:50:42.799481 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod287aac2e_b390_416b_be0e_4b8b07e5e486.slice/crio-96efa6b78ec96646c06e67bce773705e39b77cd54bcd42152da02482720349f3 WatchSource:0}: Error finding container 96efa6b78ec96646c06e67bce773705e39b77cd54bcd42152da02482720349f3: Status 404 returned error can't find the container with id 96efa6b78ec96646c06e67bce773705e39b77cd54bcd42152da02482720349f3 Jan 22 16:50:42 crc kubenswrapper[4758]: I0122 16:50:42.930000 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.254583 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2zsn9"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.256258 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.276032 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2zsn9"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.328699 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.372620 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eda699-be19-45a4-8fac-2f3c8d1f38f6-operator-scripts\") pod \"cinder-db-create-2zsn9\" (UID: \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\") " pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.373307 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4k47\" (UniqueName: \"kubernetes.io/projected/23eda699-be19-45a4-8fac-2f3c8d1f38f6-kube-api-access-s4k47\") pod \"cinder-db-create-2zsn9\" (UID: \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\") " pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.379303 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vh5np"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.380521 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.392805 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-zftxl"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.394060 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.396493 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.397709 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-bvchw" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.407408 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vh5np"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.415478 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-zftxl"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.474480 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9cm5\" (UniqueName: \"kubernetes.io/projected/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-kube-api-access-x9cm5\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.474565 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-db-sync-config-data\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.474592 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqdzm\" (UniqueName: \"kubernetes.io/projected/6623f30f-8f61-4f19-962f-de3e10559547-kube-api-access-jqdzm\") pod \"barbican-db-create-vh5np\" (UID: \"6623f30f-8f61-4f19-962f-de3e10559547\") " pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.474626 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eda699-be19-45a4-8fac-2f3c8d1f38f6-operator-scripts\") pod \"cinder-db-create-2zsn9\" (UID: \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\") " pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.474654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4k47\" (UniqueName: \"kubernetes.io/projected/23eda699-be19-45a4-8fac-2f3c8d1f38f6-kube-api-access-s4k47\") pod \"cinder-db-create-2zsn9\" (UID: \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\") " pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.474690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-combined-ca-bundle\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.474964 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6623f30f-8f61-4f19-962f-de3e10559547-operator-scripts\") pod \"barbican-db-create-vh5np\" (UID: \"6623f30f-8f61-4f19-962f-de3e10559547\") " pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:43 crc 
kubenswrapper[4758]: I0122 16:50:43.474997 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-config-data\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.475785 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eda699-be19-45a4-8fac-2f3c8d1f38f6-operator-scripts\") pod \"cinder-db-create-2zsn9\" (UID: \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\") " pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.484242 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3a43-account-create-update-bljp2"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.485413 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.487554 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.493400 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3a43-account-create-update-bljp2"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.501678 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4k47\" (UniqueName: \"kubernetes.io/projected/23eda699-be19-45a4-8fac-2f3c8d1f38f6-kube-api-access-s4k47\") pod \"cinder-db-create-2zsn9\" (UID: \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\") " pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.566091 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b2f3-account-create-update-fw588"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.567146 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.572015 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.573774 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.575948 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6623f30f-8f61-4f19-962f-de3e10559547-operator-scripts\") pod \"barbican-db-create-vh5np\" (UID: \"6623f30f-8f61-4f19-962f-de3e10559547\") " pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.575982 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-config-data\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.576059 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c76zh\" (UniqueName: \"kubernetes.io/projected/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-kube-api-access-c76zh\") pod \"barbican-3a43-account-create-update-bljp2\" (UID: \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\") " pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.576111 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9cm5\" (UniqueName: \"kubernetes.io/projected/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-kube-api-access-x9cm5\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.576156 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-db-sync-config-data\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.576172 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqdzm\" (UniqueName: \"kubernetes.io/projected/6623f30f-8f61-4f19-962f-de3e10559547-kube-api-access-jqdzm\") pod \"barbican-db-create-vh5np\" (UID: \"6623f30f-8f61-4f19-962f-de3e10559547\") " pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.576194 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-operator-scripts\") pod \"barbican-3a43-account-create-update-bljp2\" (UID: \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\") " pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.576216 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-combined-ca-bundle\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.577058 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6623f30f-8f61-4f19-962f-de3e10559547-operator-scripts\") pod \"barbican-db-create-vh5np\" (UID: 
\"6623f30f-8f61-4f19-962f-de3e10559547\") " pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.579584 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-db-sync-config-data\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.588645 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b2f3-account-create-update-fw588"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.589328 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-combined-ca-bundle\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.595534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-config-data\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.599400 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9cm5\" (UniqueName: \"kubernetes.io/projected/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-kube-api-access-x9cm5\") pod \"watcher-db-sync-zftxl\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.600419 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqdzm\" (UniqueName: \"kubernetes.io/projected/6623f30f-8f61-4f19-962f-de3e10559547-kube-api-access-jqdzm\") pod \"barbican-db-create-vh5np\" (UID: \"6623f30f-8f61-4f19-962f-de3e10559547\") " pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.669979 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.677272 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c76zh\" (UniqueName: \"kubernetes.io/projected/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-kube-api-access-c76zh\") pod \"barbican-3a43-account-create-update-bljp2\" (UID: \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\") " pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.677356 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4788613-d2cb-49ab-89de-a8c4492d02fb-operator-scripts\") pod \"cinder-b2f3-account-create-update-fw588\" (UID: \"d4788613-d2cb-49ab-89de-a8c4492d02fb\") " pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.677398 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-operator-scripts\") pod \"barbican-3a43-account-create-update-bljp2\" (UID: \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\") " 
pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.677553 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpkmr\" (UniqueName: \"kubernetes.io/projected/d4788613-d2cb-49ab-89de-a8c4492d02fb-kube-api-access-lpkmr\") pod \"cinder-b2f3-account-create-update-fw588\" (UID: \"d4788613-d2cb-49ab-89de-a8c4492d02fb\") " pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.678210 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-operator-scripts\") pod \"barbican-3a43-account-create-update-bljp2\" (UID: \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\") " pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.697788 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c76zh\" (UniqueName: \"kubernetes.io/projected/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-kube-api-access-c76zh\") pod \"barbican-3a43-account-create-update-bljp2\" (UID: \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\") " pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.740251 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.741481 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-zftxl" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.745958 4758 generic.go:334] "Generic (PLEG): container finished" podID="287aac2e-b390-416b-be0e-4b8b07e5e486" containerID="badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266" exitCode=0 Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.746004 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" event={"ID":"287aac2e-b390-416b-be0e-4b8b07e5e486","Type":"ContainerDied","Data":"badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266"} Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.746029 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" event={"ID":"287aac2e-b390-416b-be0e-4b8b07e5e486","Type":"ContainerStarted","Data":"96efa6b78ec96646c06e67bce773705e39b77cd54bcd42152da02482720349f3"} Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.779452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpkmr\" (UniqueName: \"kubernetes.io/projected/d4788613-d2cb-49ab-89de-a8c4492d02fb-kube-api-access-lpkmr\") pod \"cinder-b2f3-account-create-update-fw588\" (UID: \"d4788613-d2cb-49ab-89de-a8c4492d02fb\") " pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.792929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4788613-d2cb-49ab-89de-a8c4492d02fb-operator-scripts\") pod \"cinder-b2f3-account-create-update-fw588\" (UID: \"d4788613-d2cb-49ab-89de-a8c4492d02fb\") " pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.795148 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4788613-d2cb-49ab-89de-a8c4492d02fb-operator-scripts\") pod \"cinder-b2f3-account-create-update-fw588\" (UID: \"d4788613-d2cb-49ab-89de-a8c4492d02fb\") " pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.804707 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.810813 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpkmr\" (UniqueName: \"kubernetes.io/projected/d4788613-d2cb-49ab-89de-a8c4492d02fb-kube-api-access-lpkmr\") pod \"cinder-b2f3-account-create-update-fw588\" (UID: \"d4788613-d2cb-49ab-89de-a8c4492d02fb\") " pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.837043 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.837095 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.837142 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.838147 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b601f6fca756de859a726aaa8ab0d3554a8d02de3dc2055608cf851a04506590"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.838214 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://b601f6fca756de859a726aaa8ab0d3554a8d02de3dc2055608cf851a04506590" gracePeriod=600 Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.847595 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-fdqxw"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.848684 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.852523 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.852731 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.852867 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.852883 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q7l7k" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.868144 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fdqxw"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.894976 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-config-data\") pod \"keystone-db-sync-fdqxw\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.895064 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jbmr\" (UniqueName: \"kubernetes.io/projected/c34cee78-07e7-4762-98ed-56f4f0ffc257-kube-api-access-4jbmr\") pod \"keystone-db-sync-fdqxw\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.895195 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-combined-ca-bundle\") pod \"keystone-db-sync-fdqxw\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.979657 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2zsn9"] Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.990115 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.996603 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-config-data\") pod \"keystone-db-sync-fdqxw\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.996668 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jbmr\" (UniqueName: \"kubernetes.io/projected/c34cee78-07e7-4762-98ed-56f4f0ffc257-kube-api-access-4jbmr\") pod \"keystone-db-sync-fdqxw\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:43 crc kubenswrapper[4758]: I0122 16:50:43.996729 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-combined-ca-bundle\") pod \"keystone-db-sync-fdqxw\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.006651 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-combined-ca-bundle\") pod \"keystone-db-sync-fdqxw\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.006901 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-config-data\") pod \"keystone-db-sync-fdqxw\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.033920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jbmr\" (UniqueName: \"kubernetes.io/projected/c34cee78-07e7-4762-98ed-56f4f0ffc257-kube-api-access-4jbmr\") pod \"keystone-db-sync-fdqxw\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.186160 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.237548 4758 scope.go:117] "RemoveContainer" containerID="6f0e87874139bf5c77823efdcfbb6114f7cec10c37383d6d56f66d1151f47839" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.628810 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-zftxl"] Jan 22 16:50:44 crc kubenswrapper[4758]: W0122 16:50:44.630903 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b6f4b9a_54d9_440f_853b_b1e3a7b6069b.slice/crio-f3b714e76219fea96254d4e3a41a0d23845acebfbc47518021c28ec8baad6145 WatchSource:0}: Error finding container f3b714e76219fea96254d4e3a41a0d23845acebfbc47518021c28ec8baad6145: Status 404 returned error can't find the container with id f3b714e76219fea96254d4e3a41a0d23845acebfbc47518021c28ec8baad6145 Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.717487 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3a43-account-create-update-bljp2"] Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.729049 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vh5np"] Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.768113 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3a43-account-create-update-bljp2" event={"ID":"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b","Type":"ContainerStarted","Data":"77daada17a87da93ae22884bb7ac5bb2592c59fc38f31bdd2aa155bffad291b4"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.774348 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2zsn9" event={"ID":"23eda699-be19-45a4-8fac-2f3c8d1f38f6","Type":"ContainerStarted","Data":"3fe78ddb8dfabbeacf6f86657d4b5eb27420a594a65d6447bc24e45a87b0a904"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.774392 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2zsn9" event={"ID":"23eda699-be19-45a4-8fac-2f3c8d1f38f6","Type":"ContainerStarted","Data":"3a3eca830f86d432a5067769984703c160e390d4cd9a67a207cd255430f16627"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.789809 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerStarted","Data":"92e384cbef786adb26478411db336ab78d74400a9286cc97b00a8476a2853b59"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.789886 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerStarted","Data":"547f95b8f34477876f50da9b88337e0c77beed73657f61dd5718d36d559d828b"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.804796 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="b601f6fca756de859a726aaa8ab0d3554a8d02de3dc2055608cf851a04506590" exitCode=0 Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.804878 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"b601f6fca756de859a726aaa8ab0d3554a8d02de3dc2055608cf851a04506590"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.804908 4758 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"199c6be88db26753015fa9e30b754aa271b4aa087623fd5be9e93878eddbc087"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.804925 4758 scope.go:117] "RemoveContainer" containerID="4e70c152f84eff4ec2f397a05d06e518ec83c49b8fe5a577f81aa8dda8239367" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.818448 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-2zsn9" podStartSLOduration=1.8184208210000001 podStartE2EDuration="1.818420821s" podCreationTimestamp="2026-01-22 16:50:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:50:44.800847793 +0000 UTC m=+1266.284187078" watchObservedRunningTime="2026-01-22 16:50:44.818420821 +0000 UTC m=+1266.301760106" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.848296 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.848270814 podStartE2EDuration="15.848270814s" podCreationTimestamp="2026-01-22 16:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:50:44.836962066 +0000 UTC m=+1266.320301351" watchObservedRunningTime="2026-01-22 16:50:44.848270814 +0000 UTC m=+1266.331610109" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.893876 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.893930 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vh5np" event={"ID":"6623f30f-8f61-4f19-962f-de3e10559547","Type":"ContainerStarted","Data":"77ecb6f724e99c5ab1f0a24e7bb32195b2219efe9fd8ecbece3344de34c1bca6"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.893961 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" event={"ID":"287aac2e-b390-416b-be0e-4b8b07e5e486","Type":"ContainerStarted","Data":"e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.893982 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-zftxl" event={"ID":"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b","Type":"ContainerStarted","Data":"f3b714e76219fea96254d4e3a41a0d23845acebfbc47518021c28ec8baad6145"} Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.932145 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fdqxw"] Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.935960 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" podStartSLOduration=2.935940242 podStartE2EDuration="2.935940242s" podCreationTimestamp="2026-01-22 16:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:50:44.912481223 +0000 UTC m=+1266.395820518" watchObservedRunningTime="2026-01-22 16:50:44.935940242 +0000 UTC m=+1266.419279527" Jan 22 16:50:44 crc kubenswrapper[4758]: W0122 16:50:44.973605 4758 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4788613_d2cb_49ab_89de_a8c4492d02fb.slice/crio-10146ee208e6cb5b5beb63a7a6150851b185fa37863435374063903f57a0b558 WatchSource:0}: Error finding container 10146ee208e6cb5b5beb63a7a6150851b185fa37863435374063903f57a0b558: Status 404 returned error can't find the container with id 10146ee208e6cb5b5beb63a7a6150851b185fa37863435374063903f57a0b558 Jan 22 16:50:44 crc kubenswrapper[4758]: I0122 16:50:44.976229 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b2f3-account-create-update-fw588"] Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.416479 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.416809 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.425436 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.899924 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fdqxw" event={"ID":"c34cee78-07e7-4762-98ed-56f4f0ffc257","Type":"ContainerStarted","Data":"796dd498bf6ea5eaa691bc0789fdf656d92647cdd14a87acda60825e846b0859"} Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.909485 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3a43-account-create-update-bljp2" event={"ID":"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b","Type":"ContainerDied","Data":"1c0441153a28ebae1ae1fa6b73cb559e2b13d1e47a67db0d817f5d93516c0cfc"} Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.909412 4758 generic.go:334] "Generic (PLEG): container finished" podID="fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b" containerID="1c0441153a28ebae1ae1fa6b73cb559e2b13d1e47a67db0d817f5d93516c0cfc" exitCode=0 Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.913569 4758 generic.go:334] "Generic (PLEG): container finished" podID="6623f30f-8f61-4f19-962f-de3e10559547" containerID="818cc3b404e4fd2fb5e62d703eb4b4c480fdc26206d367b5df320f8ff6b3df0c" exitCode=0 Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.913691 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vh5np" event={"ID":"6623f30f-8f61-4f19-962f-de3e10559547","Type":"ContainerDied","Data":"818cc3b404e4fd2fb5e62d703eb4b4c480fdc26206d367b5df320f8ff6b3df0c"} Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.920929 4758 generic.go:334] "Generic (PLEG): container finished" podID="d4788613-d2cb-49ab-89de-a8c4492d02fb" containerID="64c0454d4a55b6a22687f960b3ae8567b4cf7d59a0792efca16181efdc7529ee" exitCode=0 Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.921011 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b2f3-account-create-update-fw588" event={"ID":"d4788613-d2cb-49ab-89de-a8c4492d02fb","Type":"ContainerDied","Data":"64c0454d4a55b6a22687f960b3ae8567b4cf7d59a0792efca16181efdc7529ee"} Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.921039 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b2f3-account-create-update-fw588" event={"ID":"d4788613-d2cb-49ab-89de-a8c4492d02fb","Type":"ContainerStarted","Data":"10146ee208e6cb5b5beb63a7a6150851b185fa37863435374063903f57a0b558"} Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.930664 4758 
generic.go:334] "Generic (PLEG): container finished" podID="23eda699-be19-45a4-8fac-2f3c8d1f38f6" containerID="3fe78ddb8dfabbeacf6f86657d4b5eb27420a594a65d6447bc24e45a87b0a904" exitCode=0 Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.930876 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2zsn9" event={"ID":"23eda699-be19-45a4-8fac-2f3c8d1f38f6","Type":"ContainerDied","Data":"3fe78ddb8dfabbeacf6f86657d4b5eb27420a594a65d6447bc24e45a87b0a904"} Jan 22 16:50:45 crc kubenswrapper[4758]: I0122 16:50:45.952105 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.464672 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-bnlhc"] Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.466853 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.498886 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bnlhc"] Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.550208 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-operator-scripts\") pod \"glance-db-create-bnlhc\" (UID: \"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\") " pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.550276 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm48k\" (UniqueName: \"kubernetes.io/projected/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-kube-api-access-gm48k\") pod \"glance-db-create-bnlhc\" (UID: \"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\") " pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.578882 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-da06-account-create-update-bbdrz"] Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.580183 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.584553 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.588685 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-da06-account-create-update-bbdrz"] Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.652702 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-operator-scripts\") pod \"glance-db-create-bnlhc\" (UID: \"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\") " pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.651848 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-operator-scripts\") pod \"glance-db-create-bnlhc\" (UID: \"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\") " pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.652827 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm48k\" (UniqueName: \"kubernetes.io/projected/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-kube-api-access-gm48k\") pod \"glance-db-create-bnlhc\" (UID: \"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\") " pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.669422 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm48k\" (UniqueName: \"kubernetes.io/projected/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-kube-api-access-gm48k\") pod \"glance-db-create-bnlhc\" (UID: \"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\") " pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.754414 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9phb\" (UniqueName: \"kubernetes.io/projected/d309a140-33cc-4a62-b068-8ebc4797ee7e-kube-api-access-h9phb\") pod \"glance-da06-account-create-update-bbdrz\" (UID: \"d309a140-33cc-4a62-b068-8ebc4797ee7e\") " pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.754803 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d309a140-33cc-4a62-b068-8ebc4797ee7e-operator-scripts\") pod \"glance-da06-account-create-update-bbdrz\" (UID: \"d309a140-33cc-4a62-b068-8ebc4797ee7e\") " pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.761126 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-t9c62"] Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.771310 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fc5a-account-create-update-rxqdl"] Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.771538 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t9c62" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.773551 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.775934 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.784785 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t9c62"] Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.786449 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fc5a-account-create-update-rxqdl"] Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.803081 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.871124 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrk7\" (UniqueName: \"kubernetes.io/projected/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-kube-api-access-7rrk7\") pod \"neutron-fc5a-account-create-update-rxqdl\" (UID: \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\") " pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.871204 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-operator-scripts\") pod \"neutron-fc5a-account-create-update-rxqdl\" (UID: \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\") " pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.871350 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9phb\" (UniqueName: \"kubernetes.io/projected/d309a140-33cc-4a62-b068-8ebc4797ee7e-kube-api-access-h9phb\") pod \"glance-da06-account-create-update-bbdrz\" (UID: \"d309a140-33cc-4a62-b068-8ebc4797ee7e\") " pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.871398 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksnzw\" (UniqueName: \"kubernetes.io/projected/d6339d32-557a-4f41-9d09-47d3d469615b-kube-api-access-ksnzw\") pod \"neutron-db-create-t9c62\" (UID: \"d6339d32-557a-4f41-9d09-47d3d469615b\") " pod="openstack/neutron-db-create-t9c62" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.871430 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d309a140-33cc-4a62-b068-8ebc4797ee7e-operator-scripts\") pod \"glance-da06-account-create-update-bbdrz\" (UID: \"d309a140-33cc-4a62-b068-8ebc4797ee7e\") " pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.871514 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6339d32-557a-4f41-9d09-47d3d469615b-operator-scripts\") pod \"neutron-db-create-t9c62\" (UID: \"d6339d32-557a-4f41-9d09-47d3d469615b\") " pod="openstack/neutron-db-create-t9c62" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.872706 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d309a140-33cc-4a62-b068-8ebc4797ee7e-operator-scripts\") pod 
\"glance-da06-account-create-update-bbdrz\" (UID: \"d309a140-33cc-4a62-b068-8ebc4797ee7e\") " pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.889086 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9phb\" (UniqueName: \"kubernetes.io/projected/d309a140-33cc-4a62-b068-8ebc4797ee7e-kube-api-access-h9phb\") pod \"glance-da06-account-create-update-bbdrz\" (UID: \"d309a140-33cc-4a62-b068-8ebc4797ee7e\") " pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.912819 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.973557 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6339d32-557a-4f41-9d09-47d3d469615b-operator-scripts\") pod \"neutron-db-create-t9c62\" (UID: \"d6339d32-557a-4f41-9d09-47d3d469615b\") " pod="openstack/neutron-db-create-t9c62" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.973722 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrk7\" (UniqueName: \"kubernetes.io/projected/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-kube-api-access-7rrk7\") pod \"neutron-fc5a-account-create-update-rxqdl\" (UID: \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\") " pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.973776 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-operator-scripts\") pod \"neutron-fc5a-account-create-update-rxqdl\" (UID: \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\") " pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.973913 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksnzw\" (UniqueName: \"kubernetes.io/projected/d6339d32-557a-4f41-9d09-47d3d469615b-kube-api-access-ksnzw\") pod \"neutron-db-create-t9c62\" (UID: \"d6339d32-557a-4f41-9d09-47d3d469615b\") " pod="openstack/neutron-db-create-t9c62" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.974668 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6339d32-557a-4f41-9d09-47d3d469615b-operator-scripts\") pod \"neutron-db-create-t9c62\" (UID: \"d6339d32-557a-4f41-9d09-47d3d469615b\") " pod="openstack/neutron-db-create-t9c62" Jan 22 16:50:46 crc kubenswrapper[4758]: I0122 16:50:46.974914 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-operator-scripts\") pod \"neutron-fc5a-account-create-update-rxqdl\" (UID: \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\") " pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.001136 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksnzw\" (UniqueName: \"kubernetes.io/projected/d6339d32-557a-4f41-9d09-47d3d469615b-kube-api-access-ksnzw\") pod \"neutron-db-create-t9c62\" (UID: \"d6339d32-557a-4f41-9d09-47d3d469615b\") " pod="openstack/neutron-db-create-t9c62" Jan 
22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.010114 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rrk7\" (UniqueName: \"kubernetes.io/projected/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-kube-api-access-7rrk7\") pod \"neutron-fc5a-account-create-update-rxqdl\" (UID: \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\") " pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.106628 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t9c62" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.127351 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.513342 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.546887 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.569274 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.691663 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpkmr\" (UniqueName: \"kubernetes.io/projected/d4788613-d2cb-49ab-89de-a8c4492d02fb-kube-api-access-lpkmr\") pod \"d4788613-d2cb-49ab-89de-a8c4492d02fb\" (UID: \"d4788613-d2cb-49ab-89de-a8c4492d02fb\") " Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.692374 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eda699-be19-45a4-8fac-2f3c8d1f38f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23eda699-be19-45a4-8fac-2f3c8d1f38f6" (UID: "23eda699-be19-45a4-8fac-2f3c8d1f38f6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.692424 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eda699-be19-45a4-8fac-2f3c8d1f38f6-operator-scripts\") pod \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\" (UID: \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\") " Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.692502 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4k47\" (UniqueName: \"kubernetes.io/projected/23eda699-be19-45a4-8fac-2f3c8d1f38f6-kube-api-access-s4k47\") pod \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\" (UID: \"23eda699-be19-45a4-8fac-2f3c8d1f38f6\") " Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.692888 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4788613-d2cb-49ab-89de-a8c4492d02fb-operator-scripts\") pod \"d4788613-d2cb-49ab-89de-a8c4492d02fb\" (UID: \"d4788613-d2cb-49ab-89de-a8c4492d02fb\") " Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.693123 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-operator-scripts\") pod \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\" (UID: \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\") " Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.693302 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c76zh\" (UniqueName: \"kubernetes.io/projected/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-kube-api-access-c76zh\") pod \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\" (UID: \"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b\") " Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.693350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4788613-d2cb-49ab-89de-a8c4492d02fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4788613-d2cb-49ab-89de-a8c4492d02fb" (UID: "d4788613-d2cb-49ab-89de-a8c4492d02fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.693839 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b" (UID: "fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.694198 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4788613-d2cb-49ab-89de-a8c4492d02fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.694216 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.694224 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eda699-be19-45a4-8fac-2f3c8d1f38f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.698316 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eda699-be19-45a4-8fac-2f3c8d1f38f6-kube-api-access-s4k47" (OuterVolumeSpecName: "kube-api-access-s4k47") pod "23eda699-be19-45a4-8fac-2f3c8d1f38f6" (UID: "23eda699-be19-45a4-8fac-2f3c8d1f38f6"). InnerVolumeSpecName "kube-api-access-s4k47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.698553 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4788613-d2cb-49ab-89de-a8c4492d02fb-kube-api-access-lpkmr" (OuterVolumeSpecName: "kube-api-access-lpkmr") pod "d4788613-d2cb-49ab-89de-a8c4492d02fb" (UID: "d4788613-d2cb-49ab-89de-a8c4492d02fb"). InnerVolumeSpecName "kube-api-access-lpkmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.703274 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-kube-api-access-c76zh" (OuterVolumeSpecName: "kube-api-access-c76zh") pod "fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b" (UID: "fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b"). InnerVolumeSpecName "kube-api-access-c76zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.765565 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bnlhc"] Jan 22 16:50:47 crc kubenswrapper[4758]: W0122 16:50:47.775274 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d7a40ed_25c4_4645_aaf7_3aa28db8a4d9.slice/crio-7e72bdbaca53459bc70b2675d1978fe98404c09b327203cc181a4bbb11717f0a WatchSource:0}: Error finding container 7e72bdbaca53459bc70b2675d1978fe98404c09b327203cc181a4bbb11717f0a: Status 404 returned error can't find the container with id 7e72bdbaca53459bc70b2675d1978fe98404c09b327203cc181a4bbb11717f0a Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.796703 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c76zh\" (UniqueName: \"kubernetes.io/projected/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b-kube-api-access-c76zh\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.796734 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpkmr\" (UniqueName: \"kubernetes.io/projected/d4788613-d2cb-49ab-89de-a8c4492d02fb-kube-api-access-lpkmr\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.796760 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4k47\" (UniqueName: \"kubernetes.io/projected/23eda699-be19-45a4-8fac-2f3c8d1f38f6-kube-api-access-s4k47\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.900247 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-da06-account-create-update-bbdrz"] Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.902919 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.919942 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fc5a-account-create-update-rxqdl"] Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.945892 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t9c62"] Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.970609 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-da06-account-create-update-bbdrz" event={"ID":"d309a140-33cc-4a62-b068-8ebc4797ee7e","Type":"ContainerStarted","Data":"1bd36b7b5bd71a4d4057fe02ff0a86cbaddefbf4a95042595ff8aba09c8c69e5"} Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.972340 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3a43-account-create-update-bljp2" event={"ID":"fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b","Type":"ContainerDied","Data":"77daada17a87da93ae22884bb7ac5bb2592c59fc38f31bdd2aa155bffad291b4"} Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.972453 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77daada17a87da93ae22884bb7ac5bb2592c59fc38f31bdd2aa155bffad291b4" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.972364 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3a43-account-create-update-bljp2" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.976734 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vh5np" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.976772 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vh5np" event={"ID":"6623f30f-8f61-4f19-962f-de3e10559547","Type":"ContainerDied","Data":"77ecb6f724e99c5ab1f0a24e7bb32195b2219efe9fd8ecbece3344de34c1bca6"} Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.976814 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77ecb6f724e99c5ab1f0a24e7bb32195b2219efe9fd8ecbece3344de34c1bca6" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.979399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b2f3-account-create-update-fw588" event={"ID":"d4788613-d2cb-49ab-89de-a8c4492d02fb","Type":"ContainerDied","Data":"10146ee208e6cb5b5beb63a7a6150851b185fa37863435374063903f57a0b558"} Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.979432 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b2f3-account-create-update-fw588" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.979444 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10146ee208e6cb5b5beb63a7a6150851b185fa37863435374063903f57a0b558" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.981944 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2zsn9" event={"ID":"23eda699-be19-45a4-8fac-2f3c8d1f38f6","Type":"ContainerDied","Data":"3a3eca830f86d432a5067769984703c160e390d4cd9a67a207cd255430f16627"} Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.981978 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a3eca830f86d432a5067769984703c160e390d4cd9a67a207cd255430f16627" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.981975 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2zsn9" Jan 22 16:50:47 crc kubenswrapper[4758]: I0122 16:50:47.983575 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bnlhc" event={"ID":"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9","Type":"ContainerStarted","Data":"7e72bdbaca53459bc70b2675d1978fe98404c09b327203cc181a4bbb11717f0a"} Jan 22 16:50:48 crc kubenswrapper[4758]: I0122 16:50:48.000325 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6623f30f-8f61-4f19-962f-de3e10559547-operator-scripts\") pod \"6623f30f-8f61-4f19-962f-de3e10559547\" (UID: \"6623f30f-8f61-4f19-962f-de3e10559547\") " Jan 22 16:50:48 crc kubenswrapper[4758]: I0122 16:50:48.000594 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqdzm\" (UniqueName: \"kubernetes.io/projected/6623f30f-8f61-4f19-962f-de3e10559547-kube-api-access-jqdzm\") pod \"6623f30f-8f61-4f19-962f-de3e10559547\" (UID: \"6623f30f-8f61-4f19-962f-de3e10559547\") " Jan 22 16:50:48 crc kubenswrapper[4758]: I0122 16:50:48.003032 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6623f30f-8f61-4f19-962f-de3e10559547-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6623f30f-8f61-4f19-962f-de3e10559547" (UID: "6623f30f-8f61-4f19-962f-de3e10559547"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:48 crc kubenswrapper[4758]: I0122 16:50:48.013094 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6623f30f-8f61-4f19-962f-de3e10559547-kube-api-access-jqdzm" (OuterVolumeSpecName: "kube-api-access-jqdzm") pod "6623f30f-8f61-4f19-962f-de3e10559547" (UID: "6623f30f-8f61-4f19-962f-de3e10559547"). InnerVolumeSpecName "kube-api-access-jqdzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:48 crc kubenswrapper[4758]: I0122 16:50:48.102314 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqdzm\" (UniqueName: \"kubernetes.io/projected/6623f30f-8f61-4f19-962f-de3e10559547-kube-api-access-jqdzm\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:48 crc kubenswrapper[4758]: I0122 16:50:48.102349 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6623f30f-8f61-4f19-962f-de3e10559547-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:48 crc kubenswrapper[4758]: I0122 16:50:48.994791 4758 generic.go:334] "Generic (PLEG): container finished" podID="09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd" containerID="32b2ec1d0f08d3576322d3b66eefe81d3b933190f39ae9a3e89af393bd1813be" exitCode=0 Jan 22 16:50:48 crc kubenswrapper[4758]: I0122 16:50:48.994881 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fc5a-account-create-update-rxqdl" event={"ID":"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd","Type":"ContainerDied","Data":"32b2ec1d0f08d3576322d3b66eefe81d3b933190f39ae9a3e89af393bd1813be"} Jan 22 16:50:48 crc kubenswrapper[4758]: I0122 16:50:48.994925 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fc5a-account-create-update-rxqdl" event={"ID":"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd","Type":"ContainerStarted","Data":"9c6ea1706b1774640d40ed356580a7b46fa6d957ba24463e508094c5b7b4d03d"} Jan 22 16:50:49 crc kubenswrapper[4758]: I0122 16:50:48.998528 4758 generic.go:334] "Generic (PLEG): container finished" podID="d309a140-33cc-4a62-b068-8ebc4797ee7e" containerID="3eddb0ef8815d265f88e986e83965bce976c3e51fc1ceade2b1edecf333345c1" exitCode=0 Jan 22 16:50:49 crc kubenswrapper[4758]: I0122 16:50:48.998559 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-da06-account-create-update-bbdrz" event={"ID":"d309a140-33cc-4a62-b068-8ebc4797ee7e","Type":"ContainerDied","Data":"3eddb0ef8815d265f88e986e83965bce976c3e51fc1ceade2b1edecf333345c1"} Jan 22 16:50:49 crc kubenswrapper[4758]: I0122 16:50:48.999765 4758 generic.go:334] "Generic (PLEG): container finished" podID="d6339d32-557a-4f41-9d09-47d3d469615b" containerID="d4a5a79f8ce228446900755606f405e2fdddd5132988345e404ba3c46c6d5042" exitCode=0 Jan 22 16:50:49 crc kubenswrapper[4758]: I0122 16:50:48.999832 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t9c62" event={"ID":"d6339d32-557a-4f41-9d09-47d3d469615b","Type":"ContainerDied","Data":"d4a5a79f8ce228446900755606f405e2fdddd5132988345e404ba3c46c6d5042"} Jan 22 16:50:49 crc kubenswrapper[4758]: I0122 16:50:48.999851 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t9c62" event={"ID":"d6339d32-557a-4f41-9d09-47d3d469615b","Type":"ContainerStarted","Data":"7323c3ae2668fe5be7d794a69184315f10c2784ff4e65be69a8320dcfd752d8c"} Jan 22 16:50:49 crc kubenswrapper[4758]: I0122 16:50:49.001251 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9" containerID="716bd19f34b2ec2483d2092ba3ebbf34cdcd098584b55275af7b77ca138bc641" exitCode=0 Jan 22 16:50:49 crc kubenswrapper[4758]: I0122 16:50:49.001277 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bnlhc" event={"ID":"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9","Type":"ContainerDied","Data":"716bd19f34b2ec2483d2092ba3ebbf34cdcd098584b55275af7b77ca138bc641"} Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.433493 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.442578 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.588082 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9phb\" (UniqueName: \"kubernetes.io/projected/d309a140-33cc-4a62-b068-8ebc4797ee7e-kube-api-access-h9phb\") pod \"d309a140-33cc-4a62-b068-8ebc4797ee7e\" (UID: \"d309a140-33cc-4a62-b068-8ebc4797ee7e\") " Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.588147 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d309a140-33cc-4a62-b068-8ebc4797ee7e-operator-scripts\") pod \"d309a140-33cc-4a62-b068-8ebc4797ee7e\" (UID: \"d309a140-33cc-4a62-b068-8ebc4797ee7e\") " Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.588199 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-operator-scripts\") pod \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\" (UID: \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\") " Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.588284 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rrk7\" (UniqueName: \"kubernetes.io/projected/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-kube-api-access-7rrk7\") pod \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\" (UID: \"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd\") " Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.588722 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d309a140-33cc-4a62-b068-8ebc4797ee7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d309a140-33cc-4a62-b068-8ebc4797ee7e" (UID: "d309a140-33cc-4a62-b068-8ebc4797ee7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.588719 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd" (UID: "09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.595511 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d309a140-33cc-4a62-b068-8ebc4797ee7e-kube-api-access-h9phb" (OuterVolumeSpecName: "kube-api-access-h9phb") pod "d309a140-33cc-4a62-b068-8ebc4797ee7e" (UID: "d309a140-33cc-4a62-b068-8ebc4797ee7e"). 
InnerVolumeSpecName "kube-api-access-h9phb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.603670 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-kube-api-access-7rrk7" (OuterVolumeSpecName: "kube-api-access-7rrk7") pod "09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd" (UID: "09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd"). InnerVolumeSpecName "kube-api-access-7rrk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.690295 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9phb\" (UniqueName: \"kubernetes.io/projected/d309a140-33cc-4a62-b068-8ebc4797ee7e-kube-api-access-h9phb\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.690328 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d309a140-33cc-4a62-b068-8ebc4797ee7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.690339 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:51 crc kubenswrapper[4758]: I0122 16:50:51.690348 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rrk7\" (UniqueName: \"kubernetes.io/projected/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd-kube-api-access-7rrk7\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:52 crc kubenswrapper[4758]: I0122 16:50:52.032559 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-da06-account-create-update-bbdrz" event={"ID":"d309a140-33cc-4a62-b068-8ebc4797ee7e","Type":"ContainerDied","Data":"1bd36b7b5bd71a4d4057fe02ff0a86cbaddefbf4a95042595ff8aba09c8c69e5"} Jan 22 16:50:52 crc kubenswrapper[4758]: I0122 16:50:52.032609 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd36b7b5bd71a4d4057fe02ff0a86cbaddefbf4a95042595ff8aba09c8c69e5" Jan 22 16:50:52 crc kubenswrapper[4758]: I0122 16:50:52.032684 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-da06-account-create-update-bbdrz" Jan 22 16:50:52 crc kubenswrapper[4758]: I0122 16:50:52.038502 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fc5a-account-create-update-rxqdl" event={"ID":"09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd","Type":"ContainerDied","Data":"9c6ea1706b1774640d40ed356580a7b46fa6d957ba24463e508094c5b7b4d03d"} Jan 22 16:50:52 crc kubenswrapper[4758]: I0122 16:50:52.038572 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6ea1706b1774640d40ed356580a7b46fa6d957ba24463e508094c5b7b4d03d" Jan 22 16:50:52 crc kubenswrapper[4758]: I0122 16:50:52.038668 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fc5a-account-create-update-rxqdl" Jan 22 16:50:52 crc kubenswrapper[4758]: I0122 16:50:52.404732 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:50:52 crc kubenswrapper[4758]: I0122 16:50:52.469831 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79fb856f67-6q6hs"] Jan 22 16:50:52 crc kubenswrapper[4758]: I0122 16:50:52.470397 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" podUID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" containerName="dnsmasq-dns" containerID="cri-o://e278e8759fff8b09177920664b40daf990a978ba76dafda5841c71e3d6b1843d" gracePeriod=10 Jan 22 16:50:53 crc kubenswrapper[4758]: I0122 16:50:53.052887 4758 generic.go:334] "Generic (PLEG): container finished" podID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" containerID="e278e8759fff8b09177920664b40daf990a978ba76dafda5841c71e3d6b1843d" exitCode=0 Jan 22 16:50:53 crc kubenswrapper[4758]: I0122 16:50:53.052935 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" event={"ID":"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755","Type":"ContainerDied","Data":"e278e8759fff8b09177920664b40daf990a978ba76dafda5841c71e3d6b1843d"} Jan 22 16:50:54 crc kubenswrapper[4758]: I0122 16:50:54.085503 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" podUID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.048210 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.064614 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.091241 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t9c62" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.091521 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t9c62" event={"ID":"d6339d32-557a-4f41-9d09-47d3d469615b","Type":"ContainerDied","Data":"7323c3ae2668fe5be7d794a69184315f10c2784ff4e65be69a8320dcfd752d8c"} Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.091674 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7323c3ae2668fe5be7d794a69184315f10c2784ff4e65be69a8320dcfd752d8c" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.094711 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-bnlhc" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.095012 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bnlhc" event={"ID":"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9","Type":"ContainerDied","Data":"7e72bdbaca53459bc70b2675d1978fe98404c09b327203cc181a4bbb11717f0a"} Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.095056 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e72bdbaca53459bc70b2675d1978fe98404c09b327203cc181a4bbb11717f0a" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.097916 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" event={"ID":"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755","Type":"ContainerDied","Data":"15f2f97b3e383494b10e0314d203753b62116dc958d4ceff459252336d3af890"} Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.098111 4758 scope.go:117] "RemoveContainer" containerID="e278e8759fff8b09177920664b40daf990a978ba76dafda5841c71e3d6b1843d" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.098359 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79fb856f67-6q6hs" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.120375 4758 scope.go:117] "RemoveContainer" containerID="00fd018e3fa79c43af5ef0d5be2465fa8629a9da7b8faf7c664d2d91b19985ec" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.189542 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-sb\") pod \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.189603 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-nb\") pod \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.189657 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm48k\" (UniqueName: \"kubernetes.io/projected/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-kube-api-access-gm48k\") pod \"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\" (UID: \"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\") " Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.189700 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-config\") pod \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.189789 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltlmv\" (UniqueName: \"kubernetes.io/projected/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-kube-api-access-ltlmv\") pod \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.190196 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-operator-scripts\") pod \"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\" (UID: 
\"2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9\") " Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.190223 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-dns-svc\") pod \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\" (UID: \"b704dfb7-fb7d-422c-82b0-1a0f4ae9b755\") " Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.191958 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9" (UID: "2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.193690 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-kube-api-access-gm48k" (OuterVolumeSpecName: "kube-api-access-gm48k") pod "2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9" (UID: "2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9"). InnerVolumeSpecName "kube-api-access-gm48k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.194389 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-kube-api-access-ltlmv" (OuterVolumeSpecName: "kube-api-access-ltlmv") pod "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" (UID: "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755"). InnerVolumeSpecName "kube-api-access-ltlmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.239068 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" (UID: "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.239219 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" (UID: "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.243017 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" (UID: "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.248108 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-config" (OuterVolumeSpecName: "config") pod "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" (UID: "b704dfb7-fb7d-422c-82b0-1a0f4ae9b755"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.291564 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksnzw\" (UniqueName: \"kubernetes.io/projected/d6339d32-557a-4f41-9d09-47d3d469615b-kube-api-access-ksnzw\") pod \"d6339d32-557a-4f41-9d09-47d3d469615b\" (UID: \"d6339d32-557a-4f41-9d09-47d3d469615b\") " Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.291635 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6339d32-557a-4f41-9d09-47d3d469615b-operator-scripts\") pod \"d6339d32-557a-4f41-9d09-47d3d469615b\" (UID: \"d6339d32-557a-4f41-9d09-47d3d469615b\") " Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.292383 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.292410 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.292445 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.292458 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.292471 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm48k\" (UniqueName: \"kubernetes.io/projected/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9-kube-api-access-gm48k\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.292518 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.292531 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltlmv\" (UniqueName: \"kubernetes.io/projected/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755-kube-api-access-ltlmv\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.292918 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6339d32-557a-4f41-9d09-47d3d469615b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6339d32-557a-4f41-9d09-47d3d469615b" (UID: "d6339d32-557a-4f41-9d09-47d3d469615b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.295910 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6339d32-557a-4f41-9d09-47d3d469615b-kube-api-access-ksnzw" (OuterVolumeSpecName: "kube-api-access-ksnzw") pod "d6339d32-557a-4f41-9d09-47d3d469615b" (UID: "d6339d32-557a-4f41-9d09-47d3d469615b"). InnerVolumeSpecName "kube-api-access-ksnzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.393579 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6339d32-557a-4f41-9d09-47d3d469615b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.393618 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksnzw\" (UniqueName: \"kubernetes.io/projected/d6339d32-557a-4f41-9d09-47d3d469615b-kube-api-access-ksnzw\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.437269 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79fb856f67-6q6hs"] Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.445253 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79fb856f67-6q6hs"] Jan 22 16:50:56 crc kubenswrapper[4758]: I0122 16:50:56.820586 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" path="/var/lib/kubelet/pods/b704dfb7-fb7d-422c-82b0-1a0f4ae9b755/volumes" Jan 22 16:50:57 crc kubenswrapper[4758]: E0122 16:50:57.016194 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6339d32_557a_4f41_9d09_47d3d469615b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6339d32_557a_4f41_9d09_47d3d469615b.slice/crio-7323c3ae2668fe5be7d794a69184315f10c2784ff4e65be69a8320dcfd752d8c\": RecentStats: unable to find data in memory cache]" Jan 22 16:50:57 crc kubenswrapper[4758]: I0122 16:50:57.111936 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-zftxl" event={"ID":"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b","Type":"ContainerStarted","Data":"2ccb59c8ad7c58f793fffb7731cd998a424c3cf38586390be317e0f235d90577"} Jan 22 16:50:57 crc kubenswrapper[4758]: I0122 16:50:57.113989 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-t9c62" Jan 22 16:50:57 crc kubenswrapper[4758]: I0122 16:50:57.114314 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fdqxw" event={"ID":"c34cee78-07e7-4762-98ed-56f4f0ffc257","Type":"ContainerStarted","Data":"599e8eeda8a41982195764e5e8bb2304e85da4d83f034cebe3ba0df1e5d9284a"} Jan 22 16:50:57 crc kubenswrapper[4758]: I0122 16:50:57.145636 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-zftxl" podStartSLOduration=2.862788327 podStartE2EDuration="14.145618724s" podCreationTimestamp="2026-01-22 16:50:43 +0000 UTC" firstStartedPulling="2026-01-22 16:50:44.634998976 +0000 UTC m=+1266.118338261" lastFinishedPulling="2026-01-22 16:50:55.917829363 +0000 UTC m=+1277.401168658" observedRunningTime="2026-01-22 16:50:57.136922567 +0000 UTC m=+1278.620261852" watchObservedRunningTime="2026-01-22 16:50:57.145618724 +0000 UTC m=+1278.628958009" Jan 22 16:50:57 crc kubenswrapper[4758]: I0122 16:50:57.165037 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-fdqxw" podStartSLOduration=3.26608147 podStartE2EDuration="14.165020262s" podCreationTimestamp="2026-01-22 16:50:43 +0000 UTC" firstStartedPulling="2026-01-22 16:50:44.94723875 +0000 UTC m=+1266.430578035" lastFinishedPulling="2026-01-22 16:50:55.846177542 +0000 UTC m=+1277.329516827" observedRunningTime="2026-01-22 16:50:57.154930877 +0000 UTC m=+1278.638270162" watchObservedRunningTime="2026-01-22 16:50:57.165020262 +0000 UTC m=+1278.648359547" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.648798 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9h9hb"] Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650063 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6623f30f-8f61-4f19-962f-de3e10559547" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650084 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6623f30f-8f61-4f19-962f-de3e10559547" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650094 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d309a140-33cc-4a62-b068-8ebc4797ee7e" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650102 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d309a140-33cc-4a62-b068-8ebc4797ee7e" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650113 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650122 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650140 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650150 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650165 4758 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="23eda699-be19-45a4-8fac-2f3c8d1f38f6" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650173 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eda699-be19-45a4-8fac-2f3c8d1f38f6" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650191 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" containerName="dnsmasq-dns" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650200 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" containerName="dnsmasq-dns" Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650223 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6339d32-557a-4f41-9d09-47d3d469615b" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650232 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6339d32-557a-4f41-9d09-47d3d469615b" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650245 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" containerName="init" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650254 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" containerName="init" Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650266 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650274 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: E0122 16:51:01.650291 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4788613-d2cb-49ab-89de-a8c4492d02fb" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650299 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4788613-d2cb-49ab-89de-a8c4492d02fb" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650599 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eda699-be19-45a4-8fac-2f3c8d1f38f6" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650644 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6623f30f-8f61-4f19-962f-de3e10559547" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650655 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650675 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d309a140-33cc-4a62-b068-8ebc4797ee7e" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650695 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b704dfb7-fb7d-422c-82b0-1a0f4ae9b755" containerName="dnsmasq-dns" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650710 4758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d4788613-d2cb-49ab-89de-a8c4492d02fb" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650732 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650775 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b" containerName="mariadb-account-create-update" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.650797 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6339d32-557a-4f41-9d09-47d3d469615b" containerName="mariadb-database-create" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.651578 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.661383 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9h9hb"] Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.662860 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.663140 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-th7td" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.690794 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-combined-ca-bundle\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.690876 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-db-sync-config-data\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.690928 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-config-data\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.690957 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s86q8\" (UniqueName: \"kubernetes.io/projected/f8fe0f21-8912-4d6c-ba4f-6600456784e1-kube-api-access-s86q8\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.792429 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-combined-ca-bundle\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.792481 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-db-sync-config-data\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.792519 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-config-data\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.792542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s86q8\" (UniqueName: \"kubernetes.io/projected/f8fe0f21-8912-4d6c-ba4f-6600456784e1-kube-api-access-s86q8\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.799039 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-db-sync-config-data\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.801481 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-combined-ca-bundle\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.811159 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-config-data\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:01 crc kubenswrapper[4758]: I0122 16:51:01.811976 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s86q8\" (UniqueName: \"kubernetes.io/projected/f8fe0f21-8912-4d6c-ba4f-6600456784e1-kube-api-access-s86q8\") pod \"glance-db-sync-9h9hb\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:02 crc kubenswrapper[4758]: I0122 16:51:02.040634 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9h9hb" Jan 22 16:51:03 crc kubenswrapper[4758]: I0122 16:51:02.855681 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9h9hb"] Jan 22 16:51:03 crc kubenswrapper[4758]: I0122 16:51:03.174897 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9h9hb" event={"ID":"f8fe0f21-8912-4d6c-ba4f-6600456784e1","Type":"ContainerStarted","Data":"95c2dcfb21c4dfe180e2269eb4ff18ff5560a69c3d80dca474a1d910c79f3cdb"} Jan 22 16:51:04 crc kubenswrapper[4758]: I0122 16:51:04.188682 4758 generic.go:334] "Generic (PLEG): container finished" podID="0b6f4b9a-54d9-440f-853b-b1e3a7b6069b" containerID="2ccb59c8ad7c58f793fffb7731cd998a424c3cf38586390be317e0f235d90577" exitCode=0 Jan 22 16:51:04 crc kubenswrapper[4758]: I0122 16:51:04.188777 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-zftxl" event={"ID":"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b","Type":"ContainerDied","Data":"2ccb59c8ad7c58f793fffb7731cd998a424c3cf38586390be317e0f235d90577"} Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.201541 4758 generic.go:334] "Generic (PLEG): container finished" podID="c34cee78-07e7-4762-98ed-56f4f0ffc257" containerID="599e8eeda8a41982195764e5e8bb2304e85da4d83f034cebe3ba0df1e5d9284a" exitCode=0 Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.201618 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fdqxw" event={"ID":"c34cee78-07e7-4762-98ed-56f4f0ffc257","Type":"ContainerDied","Data":"599e8eeda8a41982195764e5e8bb2304e85da4d83f034cebe3ba0df1e5d9284a"} Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.532377 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-zftxl" Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.663267 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-combined-ca-bundle\") pod \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.663421 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9cm5\" (UniqueName: \"kubernetes.io/projected/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-kube-api-access-x9cm5\") pod \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.663493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-db-sync-config-data\") pod \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.663673 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-config-data\") pod \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\" (UID: \"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b\") " Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.674088 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b" (UID: "0b6f4b9a-54d9-440f-853b-b1e3a7b6069b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.674163 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-kube-api-access-x9cm5" (OuterVolumeSpecName: "kube-api-access-x9cm5") pod "0b6f4b9a-54d9-440f-853b-b1e3a7b6069b" (UID: "0b6f4b9a-54d9-440f-853b-b1e3a7b6069b"). InnerVolumeSpecName "kube-api-access-x9cm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.691956 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b6f4b9a-54d9-440f-853b-b1e3a7b6069b" (UID: "0b6f4b9a-54d9-440f-853b-b1e3a7b6069b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.710705 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-config-data" (OuterVolumeSpecName: "config-data") pod "0b6f4b9a-54d9-440f-853b-b1e3a7b6069b" (UID: "0b6f4b9a-54d9-440f-853b-b1e3a7b6069b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.765261 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.765309 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.765321 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9cm5\" (UniqueName: \"kubernetes.io/projected/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-kube-api-access-x9cm5\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:05 crc kubenswrapper[4758]: I0122 16:51:05.765330 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.223344 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-zftxl" Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.225293 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-zftxl" event={"ID":"0b6f4b9a-54d9-440f-853b-b1e3a7b6069b","Type":"ContainerDied","Data":"f3b714e76219fea96254d4e3a41a0d23845acebfbc47518021c28ec8baad6145"} Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.225351 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b714e76219fea96254d4e3a41a0d23845acebfbc47518021c28ec8baad6145" Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.772318 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.785691 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jbmr\" (UniqueName: \"kubernetes.io/projected/c34cee78-07e7-4762-98ed-56f4f0ffc257-kube-api-access-4jbmr\") pod \"c34cee78-07e7-4762-98ed-56f4f0ffc257\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.786038 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-combined-ca-bundle\") pod \"c34cee78-07e7-4762-98ed-56f4f0ffc257\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.786126 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-config-data\") pod \"c34cee78-07e7-4762-98ed-56f4f0ffc257\" (UID: \"c34cee78-07e7-4762-98ed-56f4f0ffc257\") " Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.799085 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34cee78-07e7-4762-98ed-56f4f0ffc257-kube-api-access-4jbmr" (OuterVolumeSpecName: "kube-api-access-4jbmr") pod "c34cee78-07e7-4762-98ed-56f4f0ffc257" (UID: "c34cee78-07e7-4762-98ed-56f4f0ffc257"). InnerVolumeSpecName "kube-api-access-4jbmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.867931 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c34cee78-07e7-4762-98ed-56f4f0ffc257" (UID: "c34cee78-07e7-4762-98ed-56f4f0ffc257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.871958 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-config-data" (OuterVolumeSpecName: "config-data") pod "c34cee78-07e7-4762-98ed-56f4f0ffc257" (UID: "c34cee78-07e7-4762-98ed-56f4f0ffc257"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.890148 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.891322 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jbmr\" (UniqueName: \"kubernetes.io/projected/c34cee78-07e7-4762-98ed-56f4f0ffc257-kube-api-access-4jbmr\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:06 crc kubenswrapper[4758]: I0122 16:51:06.891351 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c34cee78-07e7-4762-98ed-56f4f0ffc257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.236453 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fdqxw" event={"ID":"c34cee78-07e7-4762-98ed-56f4f0ffc257","Type":"ContainerDied","Data":"796dd498bf6ea5eaa691bc0789fdf656d92647cdd14a87acda60825e846b0859"} Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.236493 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fdqxw" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.236500 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="796dd498bf6ea5eaa691bc0789fdf656d92647cdd14a87acda60825e846b0859" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.412695 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b8f69bc8c-bxtdk"] Jan 22 16:51:07 crc kubenswrapper[4758]: E0122 16:51:07.413911 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34cee78-07e7-4762-98ed-56f4f0ffc257" containerName="keystone-db-sync" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.413936 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34cee78-07e7-4762-98ed-56f4f0ffc257" containerName="keystone-db-sync" Jan 22 16:51:07 crc kubenswrapper[4758]: E0122 16:51:07.413955 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6f4b9a-54d9-440f-853b-b1e3a7b6069b" containerName="watcher-db-sync" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.413964 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6f4b9a-54d9-440f-853b-b1e3a7b6069b" containerName="watcher-db-sync" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.414190 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6f4b9a-54d9-440f-853b-b1e3a7b6069b" containerName="watcher-db-sync" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.414228 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34cee78-07e7-4762-98ed-56f4f0ffc257" containerName="keystone-db-sync" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.415673 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.429901 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b8f69bc8c-bxtdk"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.467494 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jv8qb"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.468949 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.475648 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.475692 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.475909 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q7l7k" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.476029 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.476127 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.508132 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jv8qb"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.581591 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.595505 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.613819 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-bvchw" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.614283 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.638153 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.640276 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.646653 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695037 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-fernet-keys\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-config-data\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695338 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-sb\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695387 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsbs5\" (UniqueName: \"kubernetes.io/projected/dbcd3850-61a6-4d03-914a-790b1257e2fe-kube-api-access-hsbs5\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695480 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-combined-ca-bundle\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695569 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-svc\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695616 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-scripts\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695642 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-nb\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695707 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-credential-keys\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695802 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-config\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695880 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnwmf\" (UniqueName: \"kubernetes.io/projected/2a9b58cb-7958-4dae-82ec-c435a970a8db-kube-api-access-wnwmf\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.695943 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-swift-storage-0\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.707840 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.759510 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.766625 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.802275 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.819836 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829196 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-fernet-keys\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829254 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-config-data\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829287 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829314 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-logs\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829338 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-config-data\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829359 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7xm\" (UniqueName: \"kubernetes.io/projected/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-kube-api-access-9b7xm\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829395 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-sb\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829412 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsbs5\" (UniqueName: \"kubernetes.io/projected/dbcd3850-61a6-4d03-914a-790b1257e2fe-kube-api-access-hsbs5\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829434 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-config-data\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829449 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829509 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4hd\" (UniqueName: \"kubernetes.io/projected/ea53227e-7c78-42b4-959c-dd2531914be2-kube-api-access-hb4hd\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829527 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-combined-ca-bundle\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829548 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea53227e-7c78-42b4-959c-dd2531914be2-logs\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829570 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829595 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-svc\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829615 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-scripts\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829633 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-nb\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829652 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-credential-keys\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829674 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b167566-11db-4fba-9e9b-711b7a5f950d-logs\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829698 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829722 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-config\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829755 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829771 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prmgg\" (UniqueName: \"kubernetes.io/projected/7b167566-11db-4fba-9e9b-711b7a5f950d-kube-api-access-prmgg\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829800 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnwmf\" (UniqueName: \"kubernetes.io/projected/2a9b58cb-7958-4dae-82ec-c435a970a8db-kube-api-access-wnwmf\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829831 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-swift-storage-0\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.829864 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.833708 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-svc\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.834613 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-sb\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.834730 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-nb\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.834918 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-config\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.835704 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-swift-storage-0\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.847774 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-fernet-keys\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.853529 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57f558b485-lrj7s"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.860554 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-config-data\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.861248 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-scripts\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.861579 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.864757 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-combined-ca-bundle\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.865425 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-credential-keys\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.884982 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnwmf\" (UniqueName: \"kubernetes.io/projected/2a9b58cb-7958-4dae-82ec-c435a970a8db-kube-api-access-wnwmf\") pod \"dnsmasq-dns-6b8f69bc8c-bxtdk\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.885289 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.885425 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.885486 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-n2vxv" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.886063 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.886180 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.922590 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsbs5\" (UniqueName: \"kubernetes.io/projected/dbcd3850-61a6-4d03-914a-790b1257e2fe-kube-api-access-hsbs5\") pod \"keystone-bootstrap-jv8qb\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.934510 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-config-data\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.934621 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.934678 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb4hd\" (UniqueName: \"kubernetes.io/projected/ea53227e-7c78-42b4-959c-dd2531914be2-kube-api-access-hb4hd\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc 
kubenswrapper[4758]: I0122 16:51:07.934724 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea53227e-7c78-42b4-959c-dd2531914be2-logs\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.948334 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.948548 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b167566-11db-4fba-9e9b-711b7a5f950d-logs\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.948608 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.948655 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.948696 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prmgg\" (UniqueName: \"kubernetes.io/projected/7b167566-11db-4fba-9e9b-711b7a5f950d-kube-api-access-prmgg\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.948957 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.949127 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-config-data\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.949211 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.949279 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-logs\") pod \"watcher-applier-0\" 
(UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.949401 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b7xm\" (UniqueName: \"kubernetes.io/projected/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-kube-api-access-9b7xm\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.949639 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.939170 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea53227e-7c78-42b4-959c-dd2531914be2-logs\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.950327 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b167566-11db-4fba-9e9b-711b7a5f950d-logs\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.953283 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-c52rv"] Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.954542 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.971588 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-logs\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.971879 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.972562 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zvr2k" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.973560 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.975723 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-config-data\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.978555 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.982441 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.982623 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.987246 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-config-data\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.994970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:07 crc kubenswrapper[4758]: I0122 16:51:07.995363 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.011442 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hb4hd\" (UniqueName: \"kubernetes.io/projected/ea53227e-7c78-42b4-959c-dd2531914be2-kube-api-access-hb4hd\") pod \"watcher-decision-engine-0\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.014501 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b7xm\" (UniqueName: \"kubernetes.io/projected/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-kube-api-access-9b7xm\") pod \"watcher-applier-0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " pod="openstack/watcher-applier-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.025495 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prmgg\" (UniqueName: \"kubernetes.io/projected/7b167566-11db-4fba-9e9b-711b7a5f950d-kube-api-access-prmgg\") pod \"watcher-api-0\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " pod="openstack/watcher-api-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.054623 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl6z8\" (UniqueName: \"kubernetes.io/projected/4424ff23-60e8-4b65-80f1-87e1003b6f46-kube-api-access-tl6z8\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.054702 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdswc\" (UniqueName: \"kubernetes.io/projected/c276b685-1d06-4272-9eeb-7b759a8bffff-kube-api-access-hdswc\") pod \"neutron-db-sync-c52rv\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.054731 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4424ff23-60e8-4b65-80f1-87e1003b6f46-horizon-secret-key\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.054780 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-scripts\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.054801 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-config\") pod \"neutron-db-sync-c52rv\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.054831 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-config-data\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.054869 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4424ff23-60e8-4b65-80f1-87e1003b6f46-logs\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.054891 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-combined-ca-bundle\") pod \"neutron-db-sync-c52rv\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.055775 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57f558b485-lrj7s"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.059879 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.065063 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.065977 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-c52rv"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.076127 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-529mh"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.081416 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.086628 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.086931 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.087582 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-85hcg" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.093261 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-529mh"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.112546 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.114801 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.118285 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.118540 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.127823 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f69bc8c-bxtdk"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.138006 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.156362 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1666997-8287-4065-bcaf-409713fc6782-etc-machine-id\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.156417 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4424ff23-60e8-4b65-80f1-87e1003b6f46-logs\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.156449 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-combined-ca-bundle\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.156478 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-combined-ca-bundle\") pod \"neutron-db-sync-c52rv\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.156566 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-scripts\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.156591 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl6z8\" (UniqueName: \"kubernetes.io/projected/4424ff23-60e8-4b65-80f1-87e1003b6f46-kube-api-access-tl6z8\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.156616 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-log-httpd\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157070 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4424ff23-60e8-4b65-80f1-87e1003b6f46-logs\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157298 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chd47\" (UniqueName: \"kubernetes.io/projected/a67f1efb-4c74-4acd-9948-de1491a8479c-kube-api-access-chd47\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157396 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-db-sync-config-data\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157424 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-run-httpd\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157481 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-config-data\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157519 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdswc\" (UniqueName: \"kubernetes.io/projected/c276b685-1d06-4272-9eeb-7b759a8bffff-kube-api-access-hdswc\") pod \"neutron-db-sync-c52rv\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157561 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4424ff23-60e8-4b65-80f1-87e1003b6f46-horizon-secret-key\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157583 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5x5k\" (UniqueName: \"kubernetes.io/projected/b1666997-8287-4065-bcaf-409713fc6782-kube-api-access-t5x5k\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157627 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-scripts\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157655 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-config\") pod \"neutron-db-sync-c52rv\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157679 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157712 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-config-data\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157757 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-config-data\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157825 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.157883 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-scripts\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.158578 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-scripts\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.159721 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-config-data\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.160791 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lv7h6"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.161977 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.164157 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4424ff23-60e8-4b65-80f1-87e1003b6f46-horizon-secret-key\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.166235 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-combined-ca-bundle\") pod \"neutron-db-sync-c52rv\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.179976 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-config\") pod \"neutron-db-sync-c52rv\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.180515 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.180707 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.180858 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-n4qvk" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.181074 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl6z8\" (UniqueName: \"kubernetes.io/projected/4424ff23-60e8-4b65-80f1-87e1003b6f46-kube-api-access-tl6z8\") pod \"horizon-57f558b485-lrj7s\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.187392 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdswc\" (UniqueName: \"kubernetes.io/projected/c276b685-1d06-4272-9eeb-7b759a8bffff-kube-api-access-hdswc\") pod \"neutron-db-sync-c52rv\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.222310 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.250261 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.260152 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.260202 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-config-data\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.260234 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.260263 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-scripts\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.260293 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1666997-8287-4065-bcaf-409713fc6782-etc-machine-id\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.260326 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwvtc\" (UniqueName: \"kubernetes.io/projected/1cc69af0-0ef0-4399-9084-e81419b65acd-kube-api-access-wwvtc\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261244 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-combined-ca-bundle\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261291 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cc69af0-0ef0-4399-9084-e81419b65acd-logs\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-scripts\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261360 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-combined-ca-bundle\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261381 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-scripts\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-log-httpd\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261414 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-config-data\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261436 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chd47\" (UniqueName: \"kubernetes.io/projected/a67f1efb-4c74-4acd-9948-de1491a8479c-kube-api-access-chd47\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261459 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-db-sync-config-data\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261475 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-run-httpd\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.261549 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-config-data\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.263754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5x5k\" (UniqueName: \"kubernetes.io/projected/b1666997-8287-4065-bcaf-409713fc6782-kube-api-access-t5x5k\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.264435 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-log-httpd\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc 
kubenswrapper[4758]: I0122 16:51:08.265214 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.265346 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-run-httpd\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.273400 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1666997-8287-4065-bcaf-409713fc6782-etc-machine-id\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.274780 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-db-sync-config-data\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.276411 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-scripts\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.276842 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-combined-ca-bundle\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.276855 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-scripts\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.276952 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lv7h6"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.276978 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-config-data\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.283543 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-config-data\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.285425 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.301041 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5x5k\" (UniqueName: \"kubernetes.io/projected/b1666997-8287-4065-bcaf-409713fc6782-kube-api-access-t5x5k\") pod \"cinder-db-sync-529mh\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.304025 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chd47\" (UniqueName: \"kubernetes.io/projected/a67f1efb-4c74-4acd-9948-de1491a8479c-kube-api-access-chd47\") pod \"ceilometer-0\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.309990 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dmssm"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.312970 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.315351 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.318054 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z4pqk" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.318621 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.340157 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-645cd9555c-62zx7"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.341731 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.366801 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cc69af0-0ef0-4399-9084-e81419b65acd-logs\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.366867 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-combined-ca-bundle\") pod \"barbican-db-sync-dmssm\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.366930 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-scripts\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.366951 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-combined-ca-bundle\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.366979 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-config-data\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.367014 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcj2v\" (UniqueName: \"kubernetes.io/projected/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-kube-api-access-fcj2v\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.367052 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-svc\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.367094 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-nb\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.367115 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-db-sync-config-data\") pod \"barbican-db-sync-dmssm\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " 
pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.367138 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnz85\" (UniqueName: \"kubernetes.io/projected/7a5061fa-23f9-42ce-9682-a3fd99d419d7-kube-api-access-lnz85\") pod \"barbican-db-sync-dmssm\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.368051 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-sb\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.368078 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-config\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.368135 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-swift-storage-0\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.368187 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwvtc\" (UniqueName: \"kubernetes.io/projected/1cc69af0-0ef0-4399-9084-e81419b65acd-kube-api-access-wwvtc\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.372898 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cc69af0-0ef0-4399-9084-e81419b65acd-logs\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.377005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-combined-ca-bundle\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.378927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-scripts\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.384150 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-config-data\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 
16:51:08.405100 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.405199 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwvtc\" (UniqueName: \"kubernetes.io/projected/1cc69af0-0ef0-4399-9084-e81419b65acd-kube-api-access-wwvtc\") pod \"placement-db-sync-lv7h6\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.414313 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dmssm"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.423659 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b5d5b589c-8c4hx"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.428671 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.430510 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-c52rv" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.451136 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-645cd9555c-62zx7"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.462501 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5d5b589c-8c4hx"] Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.465402 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-529mh" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.470355 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnz85\" (UniqueName: \"kubernetes.io/projected/7a5061fa-23f9-42ce-9682-a3fd99d419d7-kube-api-access-lnz85\") pod \"barbican-db-sync-dmssm\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.472377 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-sb\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.473304 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-sb\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.473426 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-config\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.473506 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-swift-storage-0\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: 
\"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.473626 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-scripts\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.473668 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-combined-ca-bundle\") pod \"barbican-db-sync-dmssm\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.473716 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-horizon-secret-key\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.473950 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-logs\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.474001 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcj2v\" (UniqueName: \"kubernetes.io/projected/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-kube-api-access-fcj2v\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.474040 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n4qw\" (UniqueName: \"kubernetes.io/projected/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-kube-api-access-9n4qw\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.474082 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-svc\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.474105 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-config-data\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.474166 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-nb\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: 
\"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.474213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-db-sync-config-data\") pod \"barbican-db-sync-dmssm\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.479123 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-nb\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.479123 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-svc\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.479795 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-config\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.480121 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-swift-storage-0\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.489878 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-db-sync-config-data\") pod \"barbican-db-sync-dmssm\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.491548 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.515664 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lv7h6" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.523000 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcj2v\" (UniqueName: \"kubernetes.io/projected/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-kube-api-access-fcj2v\") pod \"dnsmasq-dns-645cd9555c-62zx7\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.524084 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnz85\" (UniqueName: \"kubernetes.io/projected/7a5061fa-23f9-42ce-9682-a3fd99d419d7-kube-api-access-lnz85\") pod \"barbican-db-sync-dmssm\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.525080 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-combined-ca-bundle\") pod \"barbican-db-sync-dmssm\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.588526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-scripts\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.588593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-horizon-secret-key\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.588640 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-logs\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.588876 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n4qw\" (UniqueName: \"kubernetes.io/projected/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-kube-api-access-9n4qw\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.588899 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-config-data\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.590371 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-config-data\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.597164 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-logs\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.611097 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-scripts\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.621498 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-horizon-secret-key\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.656727 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dmssm" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.658825 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n4qw\" (UniqueName: \"kubernetes.io/projected/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-kube-api-access-9n4qw\") pod \"horizon-5b5d5b589c-8c4hx\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.699098 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:08 crc kubenswrapper[4758]: I0122 16:51:08.817478 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:09 crc kubenswrapper[4758]: I0122 16:51:09.396828 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jv8qb"] Jan 22 16:51:09 crc kubenswrapper[4758]: W0122 16:51:09.400536 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbcd3850_61a6_4d03_914a_790b1257e2fe.slice/crio-423c8abe6815b8a8670b76291d6d1848aaf92f1ac62c1969580198667c7a64da WatchSource:0}: Error finding container 423c8abe6815b8a8670b76291d6d1848aaf92f1ac62c1969580198667c7a64da: Status 404 returned error can't find the container with id 423c8abe6815b8a8670b76291d6d1848aaf92f1ac62c1969580198667c7a64da Jan 22 16:51:09 crc kubenswrapper[4758]: I0122 16:51:09.436383 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f69bc8c-bxtdk"] Jan 22 16:51:09 crc kubenswrapper[4758]: W0122 16:51:09.614707 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a9b58cb_7958_4dae_82ec_c435a970a8db.slice/crio-a7f790ed3ee4671c58f53e69970d6b8c66a6f7bbd8845584bc0886697c691e6e WatchSource:0}: Error finding container a7f790ed3ee4671c58f53e69970d6b8c66a6f7bbd8845584bc0886697c691e6e: Status 404 returned error can't find the container with id a7f790ed3ee4671c58f53e69970d6b8c66a6f7bbd8845584bc0886697c691e6e Jan 22 16:51:09 crc kubenswrapper[4758]: I0122 16:51:09.636198 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:51:09 crc kubenswrapper[4758]: I0122 16:51:09.908407 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-c52rv"] Jan 22 16:51:09 crc kubenswrapper[4758]: I0122 16:51:09.942849 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 16:51:09 crc kubenswrapper[4758]: I0122 16:51:09.951869 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.011617 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lv7h6"] Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.017172 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57f558b485-lrj7s"] Jan 22 16:51:10 crc kubenswrapper[4758]: W0122 16:51:10.049228 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cc69af0_0ef0_4399_9084_e81419b65acd.slice/crio-8859445bae632771d33099612a2d6cd150dac680a2abe866e9636c3e306c179d WatchSource:0}: Error finding container 8859445bae632771d33099612a2d6cd150dac680a2abe866e9636c3e306c179d: Status 404 returned error can't find the container with id 8859445bae632771d33099612a2d6cd150dac680a2abe866e9636c3e306c179d Jan 22 16:51:10 crc kubenswrapper[4758]: W0122 16:51:10.080159 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e324fa8_b3ee_4072_8a8d_5c08e771d0c0.slice/crio-4d87ac8190614b24ac71a438a4e7643ba4fa34b3ba33d9f3c1be4c1c5737674a WatchSource:0}: Error finding container 4d87ac8190614b24ac71a438a4e7643ba4fa34b3ba33d9f3c1be4c1c5737674a: Status 404 returned error can't find the container with id 4d87ac8190614b24ac71a438a4e7643ba4fa34b3ba33d9f3c1be4c1c5737674a Jan 22 16:51:10 crc kubenswrapper[4758]: W0122 16:51:10.082224 4758 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4424ff23_60e8_4b65_80f1_87e1003b6f46.slice/crio-e22d74a76dd4a2b886dab039bebf7877b847c30eb40232a5256b67257958671c WatchSource:0}: Error finding container e22d74a76dd4a2b886dab039bebf7877b847c30eb40232a5256b67257958671c: Status 404 returned error can't find the container with id e22d74a76dd4a2b886dab039bebf7877b847c30eb40232a5256b67257958671c Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.152014 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-529mh"] Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.169241 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5d5b589c-8c4hx"] Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.180338 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:51:10 crc kubenswrapper[4758]: W0122 16:51:10.190388 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea53227e_7c78_42b4_959c_dd2531914be2.slice/crio-c77ced53f64d07ef3a38ca638ea8cd3142878c1beb3143a78ba43a71d899d5f1 WatchSource:0}: Error finding container c77ced53f64d07ef3a38ca638ea8cd3142878c1beb3143a78ba43a71d899d5f1: Status 404 returned error can't find the container with id c77ced53f64d07ef3a38ca638ea8cd3142878c1beb3143a78ba43a71d899d5f1 Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.195923 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dmssm"] Jan 22 16:51:10 crc kubenswrapper[4758]: W0122 16:51:10.201278 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e14bf40_a1bf_421a_acb7_f8e45f36dbf6.slice/crio-036f040a3a0bd5395031fa867b9314d30fbb931e79054ae07f95937b4b56bf3d WatchSource:0}: Error finding container 036f040a3a0bd5395031fa867b9314d30fbb931e79054ae07f95937b4b56bf3d: Status 404 returned error can't find the container with id 036f040a3a0bd5395031fa867b9314d30fbb931e79054ae07f95937b4b56bf3d Jan 22 16:51:10 crc kubenswrapper[4758]: W0122 16:51:10.219852 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1666997_8287_4065_bcaf_409713fc6782.slice/crio-d2857fc65ea5b8a2f731139ab65aa0f1107efabab815078a44ab02715be19125 WatchSource:0}: Error finding container d2857fc65ea5b8a2f731139ab65aa0f1107efabab815078a44ab02715be19125: Status 404 returned error can't find the container with id d2857fc65ea5b8a2f731139ab65aa0f1107efabab815078a44ab02715be19125 Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.241469 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-645cd9555c-62zx7"] Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.321529 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-529mh" event={"ID":"b1666997-8287-4065-bcaf-409713fc6782","Type":"ContainerStarted","Data":"d2857fc65ea5b8a2f731139ab65aa0f1107efabab815078a44ab02715be19125"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.326605 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ea53227e-7c78-42b4-959c-dd2531914be2","Type":"ContainerStarted","Data":"c77ced53f64d07ef3a38ca638ea8cd3142878c1beb3143a78ba43a71d899d5f1"} Jan 22 16:51:10 crc kubenswrapper[4758]: 
I0122 16:51:10.333756 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a67f1efb-4c74-4acd-9948-de1491a8479c","Type":"ContainerStarted","Data":"d7a1cf246f1b5bd5c74c6e6f6c8fb54d02ec6635810328fd6150b356007a2a66"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.340968 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dmssm" event={"ID":"7a5061fa-23f9-42ce-9682-a3fd99d419d7","Type":"ContainerStarted","Data":"06e8257cc9a7b2575bb3496493220632652948430f3d663277adc891aabc2e93"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.346982 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" event={"ID":"2a9b58cb-7958-4dae-82ec-c435a970a8db","Type":"ContainerStarted","Data":"a7f790ed3ee4671c58f53e69970d6b8c66a6f7bbd8845584bc0886697c691e6e"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.368095 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" event={"ID":"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6","Type":"ContainerStarted","Data":"036f040a3a0bd5395031fa867b9314d30fbb931e79054ae07f95937b4b56bf3d"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.369662 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5d5b589c-8c4hx" event={"ID":"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2","Type":"ContainerStarted","Data":"c1b9d0a6ef91826b41a41b65dcf6960a48909596a19e804626ac0264c304bbd5"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.370662 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0","Type":"ContainerStarted","Data":"4d87ac8190614b24ac71a438a4e7643ba4fa34b3ba33d9f3c1be4c1c5737674a"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.371673 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lv7h6" event={"ID":"1cc69af0-0ef0-4399-9084-e81419b65acd","Type":"ContainerStarted","Data":"8859445bae632771d33099612a2d6cd150dac680a2abe866e9636c3e306c179d"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.373708 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c52rv" event={"ID":"c276b685-1d06-4272-9eeb-7b759a8bffff","Type":"ContainerStarted","Data":"bdda7b4130b590b05b0c7299e6a52049afd08bb5a570e99c0c911722dd51e7fb"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.377538 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7b167566-11db-4fba-9e9b-711b7a5f950d","Type":"ContainerStarted","Data":"0b344bff57236d08e3baf13f0393ce1fe41a00bd1b26c2436014a3fa2fd2a966"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.380090 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jv8qb" event={"ID":"dbcd3850-61a6-4d03-914a-790b1257e2fe","Type":"ContainerStarted","Data":"423c8abe6815b8a8670b76291d6d1848aaf92f1ac62c1969580198667c7a64da"} Jan 22 16:51:10 crc kubenswrapper[4758]: I0122 16:51:10.389314 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57f558b485-lrj7s" event={"ID":"4424ff23-60e8-4b65-80f1-87e1003b6f46","Type":"ContainerStarted","Data":"e22d74a76dd4a2b886dab039bebf7877b847c30eb40232a5256b67257958671c"} Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.023234 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 
16:51:11.071579 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57f558b485-lrj7s"] Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.189924 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.204794 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-744dd76757-hj9wx"] Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.215227 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.236474 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-744dd76757-hj9wx"] Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.310318 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4pf\" (UniqueName: \"kubernetes.io/projected/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-kube-api-access-sg4pf\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.310484 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-horizon-secret-key\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.310519 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-logs\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.310569 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-config-data\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.310604 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-scripts\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.414880 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-horizon-secret-key\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.415239 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-logs\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.415288 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-config-data\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.415326 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-scripts\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.415415 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg4pf\" (UniqueName: \"kubernetes.io/projected/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-kube-api-access-sg4pf\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.418558 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-scripts\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.418889 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-config-data\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.419062 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-logs\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.422332 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-horizon-secret-key\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.444871 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7b167566-11db-4fba-9e9b-711b7a5f950d","Type":"ContainerStarted","Data":"ca8a7fdb46a4f7dcb61ead11f8b04d5b481d20cfe1c641eb489a5d5929fe9c8a"} Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.464925 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg4pf\" (UniqueName: \"kubernetes.io/projected/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-kube-api-access-sg4pf\") pod \"horizon-744dd76757-hj9wx\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.477054 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jv8qb" event={"ID":"dbcd3850-61a6-4d03-914a-790b1257e2fe","Type":"ContainerStarted","Data":"47f3726f7387eb4ba5b0e7d9659e764c8691d09a28b4e2e4cf0e7bb995fe1b82"} Jan 22 16:51:11 crc 
kubenswrapper[4758]: I0122 16:51:11.493548 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c52rv" event={"ID":"c276b685-1d06-4272-9eeb-7b759a8bffff","Type":"ContainerStarted","Data":"3d90a62b483d010a7a8dc323d0a9383e4c40248ba21a44fdc6e779c4e5730570"} Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.518113 4758 generic.go:334] "Generic (PLEG): container finished" podID="2a9b58cb-7958-4dae-82ec-c435a970a8db" containerID="705b20c4f0654e4eead9725a2e6850e1c65a78be9c61653c817aabd8dfffc9b1" exitCode=0 Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.518331 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" event={"ID":"2a9b58cb-7958-4dae-82ec-c435a970a8db","Type":"ContainerDied","Data":"705b20c4f0654e4eead9725a2e6850e1c65a78be9c61653c817aabd8dfffc9b1"} Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.520523 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jv8qb" podStartSLOduration=4.520504109 podStartE2EDuration="4.520504109s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:11.512227423 +0000 UTC m=+1292.995566708" watchObservedRunningTime="2026-01-22 16:51:11.520504109 +0000 UTC m=+1293.003843394" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.538207 4758 generic.go:334] "Generic (PLEG): container finished" podID="5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" containerID="15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57" exitCode=0 Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.538253 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" event={"ID":"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6","Type":"ContainerDied","Data":"15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57"} Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.555758 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:51:11 crc kubenswrapper[4758]: I0122 16:51:11.586214 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-c52rv" podStartSLOduration=4.584718118 podStartE2EDuration="4.584718118s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:11.55432481 +0000 UTC m=+1293.037664095" watchObservedRunningTime="2026-01-22 16:51:11.584718118 +0000 UTC m=+1293.068057413" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.477946 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5d5b589c-8c4hx"] Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.537640 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-88b76f788-th2jq"] Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.550866 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.564063 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.567425 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-88b76f788-th2jq"] Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.653942 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-secret-key\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.653991 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-scripts\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.654037 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-config-data\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.654069 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntmsn\" (UniqueName: \"kubernetes.io/projected/40487aaa-4c45-41b2-ab14-76477ed2f4bb-kube-api-access-ntmsn\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.654099 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40487aaa-4c45-41b2-ab14-76477ed2f4bb-logs\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.654122 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-combined-ca-bundle\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.654155 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-tls-certs\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.764298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-secret-key\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 
16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.764373 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-scripts\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.764479 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-config-data\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.764533 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntmsn\" (UniqueName: \"kubernetes.io/projected/40487aaa-4c45-41b2-ab14-76477ed2f4bb-kube-api-access-ntmsn\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.764589 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40487aaa-4c45-41b2-ab14-76477ed2f4bb-logs\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.764634 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-combined-ca-bundle\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.764702 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-tls-certs\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.774018 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-config-data\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.774775 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40487aaa-4c45-41b2-ab14-76477ed2f4bb-logs\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.776099 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-tls-certs\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.787130 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-scripts\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.797435 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-combined-ca-bundle\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.810155 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-secret-key\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.854484 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntmsn\" (UniqueName: \"kubernetes.io/projected/40487aaa-4c45-41b2-ab14-76477ed2f4bb-kube-api-access-ntmsn\") pod \"horizon-88b76f788-th2jq\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.860078 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-744dd76757-hj9wx"] Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.864613 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55b94d9b56-4x8cx"] Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.875579 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.898223 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.917705 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55b94d9b56-4x8cx"] Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.972392 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rst44\" (UniqueName: \"kubernetes.io/projected/44cc928c-2531-4055-9b8f-b36957f3485d-kube-api-access-rst44\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.972676 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44cc928c-2531-4055-9b8f-b36957f3485d-config-data\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.972836 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44cc928c-2531-4055-9b8f-b36957f3485d-horizon-secret-key\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.972947 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/44cc928c-2531-4055-9b8f-b36957f3485d-horizon-tls-certs\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.973148 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44cc928c-2531-4055-9b8f-b36957f3485d-logs\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.973409 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44cc928c-2531-4055-9b8f-b36957f3485d-combined-ca-bundle\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:16 crc kubenswrapper[4758]: I0122 16:51:16.973519 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44cc928c-2531-4055-9b8f-b36957f3485d-scripts\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.075630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44cc928c-2531-4055-9b8f-b36957f3485d-config-data\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.076020 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/44cc928c-2531-4055-9b8f-b36957f3485d-horizon-secret-key\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.076048 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/44cc928c-2531-4055-9b8f-b36957f3485d-horizon-tls-certs\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.076114 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44cc928c-2531-4055-9b8f-b36957f3485d-logs\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.076181 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44cc928c-2531-4055-9b8f-b36957f3485d-combined-ca-bundle\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.076204 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44cc928c-2531-4055-9b8f-b36957f3485d-scripts\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.076231 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rst44\" (UniqueName: \"kubernetes.io/projected/44cc928c-2531-4055-9b8f-b36957f3485d-kube-api-access-rst44\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.077167 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44cc928c-2531-4055-9b8f-b36957f3485d-config-data\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.077215 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44cc928c-2531-4055-9b8f-b36957f3485d-logs\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.077535 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44cc928c-2531-4055-9b8f-b36957f3485d-scripts\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.081666 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44cc928c-2531-4055-9b8f-b36957f3485d-horizon-secret-key\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " 
pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.081697 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44cc928c-2531-4055-9b8f-b36957f3485d-combined-ca-bundle\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.083363 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/44cc928c-2531-4055-9b8f-b36957f3485d-horizon-tls-certs\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.094305 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rst44\" (UniqueName: \"kubernetes.io/projected/44cc928c-2531-4055-9b8f-b36957f3485d-kube-api-access-rst44\") pod \"horizon-55b94d9b56-4x8cx\" (UID: \"44cc928c-2531-4055-9b8f-b36957f3485d\") " pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:17 crc kubenswrapper[4758]: I0122 16:51:17.287632 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:51:18 crc kubenswrapper[4758]: I0122 16:51:18.682299 4758 generic.go:334] "Generic (PLEG): container finished" podID="dbcd3850-61a6-4d03-914a-790b1257e2fe" containerID="47f3726f7387eb4ba5b0e7d9659e764c8691d09a28b4e2e4cf0e7bb995fe1b82" exitCode=0 Jan 22 16:51:18 crc kubenswrapper[4758]: I0122 16:51:18.682398 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jv8qb" event={"ID":"dbcd3850-61a6-4d03-914a-790b1257e2fe","Type":"ContainerDied","Data":"47f3726f7387eb4ba5b0e7d9659e764c8691d09a28b4e2e4cf0e7bb995fe1b82"} Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.216851 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:23 crc kubenswrapper[4758]: E0122 16:51:23.255985 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 22 16:51:23 crc kubenswrapper[4758]: E0122 16:51:23.256150 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 22 16:51:23 crc kubenswrapper[4758]: E0122 16:51:23.256500 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.196:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s86q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9h9hb_openstack(f8fe0f21-8912-4d6c-ba4f-6600456784e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:51:23 crc kubenswrapper[4758]: E0122 16:51:23.257820 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9h9hb" podUID="f8fe0f21-8912-4d6c-ba4f-6600456784e1" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.401446 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-scripts\") pod 
\"dbcd3850-61a6-4d03-914a-790b1257e2fe\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.401609 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-combined-ca-bundle\") pod \"dbcd3850-61a6-4d03-914a-790b1257e2fe\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.401693 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-fernet-keys\") pod \"dbcd3850-61a6-4d03-914a-790b1257e2fe\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.401725 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsbs5\" (UniqueName: \"kubernetes.io/projected/dbcd3850-61a6-4d03-914a-790b1257e2fe-kube-api-access-hsbs5\") pod \"dbcd3850-61a6-4d03-914a-790b1257e2fe\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.401766 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-credential-keys\") pod \"dbcd3850-61a6-4d03-914a-790b1257e2fe\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.401801 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-config-data\") pod \"dbcd3850-61a6-4d03-914a-790b1257e2fe\" (UID: \"dbcd3850-61a6-4d03-914a-790b1257e2fe\") " Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.409133 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dbcd3850-61a6-4d03-914a-790b1257e2fe" (UID: "dbcd3850-61a6-4d03-914a-790b1257e2fe"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.409208 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcd3850-61a6-4d03-914a-790b1257e2fe-kube-api-access-hsbs5" (OuterVolumeSpecName: "kube-api-access-hsbs5") pod "dbcd3850-61a6-4d03-914a-790b1257e2fe" (UID: "dbcd3850-61a6-4d03-914a-790b1257e2fe"). InnerVolumeSpecName "kube-api-access-hsbs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.414845 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-scripts" (OuterVolumeSpecName: "scripts") pod "dbcd3850-61a6-4d03-914a-790b1257e2fe" (UID: "dbcd3850-61a6-4d03-914a-790b1257e2fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.418539 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dbcd3850-61a6-4d03-914a-790b1257e2fe" (UID: "dbcd3850-61a6-4d03-914a-790b1257e2fe"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.428594 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbcd3850-61a6-4d03-914a-790b1257e2fe" (UID: "dbcd3850-61a6-4d03-914a-790b1257e2fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.437423 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-config-data" (OuterVolumeSpecName: "config-data") pod "dbcd3850-61a6-4d03-914a-790b1257e2fe" (UID: "dbcd3850-61a6-4d03-914a-790b1257e2fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.505363 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.505664 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.505678 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsbs5\" (UniqueName: \"kubernetes.io/projected/dbcd3850-61a6-4d03-914a-790b1257e2fe-kube-api-access-hsbs5\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.505704 4758 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.505716 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:23 crc kubenswrapper[4758]: I0122 16:51:23.505726 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcd3850-61a6-4d03-914a-790b1257e2fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.097053 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jv8qb" Jan 22 16:51:24 crc kubenswrapper[4758]: E0122 16:51:24.101477 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-9h9hb" podUID="f8fe0f21-8912-4d6c-ba4f-6600456784e1" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.101787 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jv8qb" event={"ID":"dbcd3850-61a6-4d03-914a-790b1257e2fe","Type":"ContainerDied","Data":"423c8abe6815b8a8670b76291d6d1848aaf92f1ac62c1969580198667c7a64da"} Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.101828 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="423c8abe6815b8a8670b76291d6d1848aaf92f1ac62c1969580198667c7a64da" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.339404 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jv8qb"] Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.346625 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jv8qb"] Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.446701 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2l5dg"] Jan 22 16:51:24 crc kubenswrapper[4758]: E0122 16:51:24.447159 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcd3850-61a6-4d03-914a-790b1257e2fe" containerName="keystone-bootstrap" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.447174 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcd3850-61a6-4d03-914a-790b1257e2fe" containerName="keystone-bootstrap" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.447366 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcd3850-61a6-4d03-914a-790b1257e2fe" containerName="keystone-bootstrap" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.448011 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.450663 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.450694 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.450770 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q7l7k" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.450701 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.450917 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.473013 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2l5dg"] Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.546859 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-config-data\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.546900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-fernet-keys\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.546943 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-combined-ca-bundle\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.547101 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlflf\" (UniqueName: \"kubernetes.io/projected/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-kube-api-access-nlflf\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.547157 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-credential-keys\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.547181 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-scripts\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.649165 4758 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-config-data\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.649214 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-fernet-keys\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.649264 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-combined-ca-bundle\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.649346 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlflf\" (UniqueName: \"kubernetes.io/projected/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-kube-api-access-nlflf\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.649390 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-credential-keys\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.649419 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-scripts\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.654298 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-combined-ca-bundle\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.654368 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-scripts\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.655432 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-config-data\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.661347 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-credential-keys\") pod \"keystone-bootstrap-2l5dg\" (UID: 
\"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.662241 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-fernet-keys\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.674330 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlflf\" (UniqueName: \"kubernetes.io/projected/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-kube-api-access-nlflf\") pod \"keystone-bootstrap-2l5dg\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.769436 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:51:24 crc kubenswrapper[4758]: I0122 16:51:24.821995 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbcd3850-61a6-4d03-914a-790b1257e2fe" path="/var/lib/kubelet/pods/dbcd3850-61a6-4d03-914a-790b1257e2fe/volumes" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.288084 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.401469 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnwmf\" (UniqueName: \"kubernetes.io/projected/2a9b58cb-7958-4dae-82ec-c435a970a8db-kube-api-access-wnwmf\") pod \"2a9b58cb-7958-4dae-82ec-c435a970a8db\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.401550 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-sb\") pod \"2a9b58cb-7958-4dae-82ec-c435a970a8db\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.401582 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-nb\") pod \"2a9b58cb-7958-4dae-82ec-c435a970a8db\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.401632 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-config\") pod \"2a9b58cb-7958-4dae-82ec-c435a970a8db\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.401691 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-swift-storage-0\") pod \"2a9b58cb-7958-4dae-82ec-c435a970a8db\" (UID: \"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.401761 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-svc\") pod \"2a9b58cb-7958-4dae-82ec-c435a970a8db\" (UID: 
\"2a9b58cb-7958-4dae-82ec-c435a970a8db\") " Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.427652 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a9b58cb-7958-4dae-82ec-c435a970a8db-kube-api-access-wnwmf" (OuterVolumeSpecName: "kube-api-access-wnwmf") pod "2a9b58cb-7958-4dae-82ec-c435a970a8db" (UID: "2a9b58cb-7958-4dae-82ec-c435a970a8db"). InnerVolumeSpecName "kube-api-access-wnwmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.432437 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-config" (OuterVolumeSpecName: "config") pod "2a9b58cb-7958-4dae-82ec-c435a970a8db" (UID: "2a9b58cb-7958-4dae-82ec-c435a970a8db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.435521 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2a9b58cb-7958-4dae-82ec-c435a970a8db" (UID: "2a9b58cb-7958-4dae-82ec-c435a970a8db"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.436725 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a9b58cb-7958-4dae-82ec-c435a970a8db" (UID: "2a9b58cb-7958-4dae-82ec-c435a970a8db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.454901 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a9b58cb-7958-4dae-82ec-c435a970a8db" (UID: "2a9b58cb-7958-4dae-82ec-c435a970a8db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.457791 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2a9b58cb-7958-4dae-82ec-c435a970a8db" (UID: "2a9b58cb-7958-4dae-82ec-c435a970a8db"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.505102 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.505137 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.505150 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.505161 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnwmf\" (UniqueName: \"kubernetes.io/projected/2a9b58cb-7958-4dae-82ec-c435a970a8db-kube-api-access-wnwmf\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.505172 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:25 crc kubenswrapper[4758]: I0122 16:51:25.505188 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a9b58cb-7958-4dae-82ec-c435a970a8db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:26 crc kubenswrapper[4758]: I0122 16:51:26.113795 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" event={"ID":"2a9b58cb-7958-4dae-82ec-c435a970a8db","Type":"ContainerDied","Data":"a7f790ed3ee4671c58f53e69970d6b8c66a6f7bbd8845584bc0886697c691e6e"} Jan 22 16:51:26 crc kubenswrapper[4758]: I0122 16:51:26.113860 4758 scope.go:117] "RemoveContainer" containerID="705b20c4f0654e4eead9725a2e6850e1c65a78be9c61653c817aabd8dfffc9b1" Jan 22 16:51:26 crc kubenswrapper[4758]: I0122 16:51:26.113882 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b8f69bc8c-bxtdk" Jan 22 16:51:26 crc kubenswrapper[4758]: I0122 16:51:26.174258 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f69bc8c-bxtdk"] Jan 22 16:51:26 crc kubenswrapper[4758]: I0122 16:51:26.181647 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b8f69bc8c-bxtdk"] Jan 22 16:51:26 crc kubenswrapper[4758]: I0122 16:51:26.840375 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a9b58cb-7958-4dae-82ec-c435a970a8db" path="/var/lib/kubelet/pods/2a9b58cb-7958-4dae-82ec-c435a970a8db/volumes" Jan 22 16:51:29 crc kubenswrapper[4758]: E0122 16:51:29.764900 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 22 16:51:29 crc kubenswrapper[4758]: E0122 16:51:29.765312 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 22 16:51:29 crc kubenswrapper[4758]: E0122 16:51:29.765444 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.196:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwvtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-lv7h6_openstack(1cc69af0-0ef0-4399-9084-e81419b65acd): ErrImagePull: rpc error: 
code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:51:29 crc kubenswrapper[4758]: E0122 16:51:29.767020 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-lv7h6" podUID="1cc69af0-0ef0-4399-9084-e81419b65acd" Jan 22 16:51:30 crc kubenswrapper[4758]: E0122 16:51:30.050824 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 22 16:51:30 crc kubenswrapper[4758]: E0122 16:51:30.050873 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 22 16:51:30 crc kubenswrapper[4758]: E0122 16:51:30.050980 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.196:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch68dh55fhc8h5f5h578h67fhc9hdh589h575h698h685h94hd9hd6hb6hdbh9bhbdh684h76h69h5d7h99h57bhdbhb5hd7h5c5h5dbh65cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chd47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ceilometer-0_openstack(a67f1efb-4c74-4acd-9948-de1491a8479c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:51:30 crc kubenswrapper[4758]: I0122 16:51:30.053861 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:51:30 crc kubenswrapper[4758]: E0122 16:51:30.174987 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-lv7h6" podUID="1cc69af0-0ef0-4399-9084-e81419b65acd" Jan 22 16:51:36 crc kubenswrapper[4758]: E0122 16:51:36.598541 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 22 16:51:36 crc kubenswrapper[4758]: E0122 16:51:36.598880 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 22 16:51:36 crc kubenswrapper[4758]: E0122 16:51:36.599004 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.196:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n78h685h654h5b8h5ffh585h6h666h564h654h576h548hf6h54dhfdh9fh5d8h567h6dh5cfh686h687h5d7h696h68chb7hc6h59bh5bfh595h64bh86q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl6z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-57f558b485-lrj7s_openstack(4424ff23-60e8-4b65-80f1-87e1003b6f46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:51:36 crc 
kubenswrapper[4758]: E0122 16:51:36.613387 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-57f558b485-lrj7s" podUID="4424ff23-60e8-4b65-80f1-87e1003b6f46" Jan 22 16:51:37 crc kubenswrapper[4758]: I0122 16:51:37.081610 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-744dd76757-hj9wx"] Jan 22 16:51:38 crc kubenswrapper[4758]: I0122 16:51:38.072545 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55b94d9b56-4x8cx"] Jan 22 16:51:50 crc kubenswrapper[4758]: W0122 16:51:50.747118 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44cc928c_2531_4055_9b8f_b36957f3485d.slice/crio-1925eb74e5abe2f00b751aca5b2376cefb445d08dcafd0932af720821c976c3e WatchSource:0}: Error finding container 1925eb74e5abe2f00b751aca5b2376cefb445d08dcafd0932af720821c976c3e: Status 404 returned error can't find the container with id 1925eb74e5abe2f00b751aca5b2376cefb445d08dcafd0932af720821c976c3e Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.851577 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.918320 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4424ff23-60e8-4b65-80f1-87e1003b6f46-logs\") pod \"4424ff23-60e8-4b65-80f1-87e1003b6f46\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.918401 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl6z8\" (UniqueName: \"kubernetes.io/projected/4424ff23-60e8-4b65-80f1-87e1003b6f46-kube-api-access-tl6z8\") pod \"4424ff23-60e8-4b65-80f1-87e1003b6f46\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.918449 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-scripts\") pod \"4424ff23-60e8-4b65-80f1-87e1003b6f46\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.918486 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-config-data\") pod \"4424ff23-60e8-4b65-80f1-87e1003b6f46\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.918600 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4424ff23-60e8-4b65-80f1-87e1003b6f46-horizon-secret-key\") pod \"4424ff23-60e8-4b65-80f1-87e1003b6f46\" (UID: \"4424ff23-60e8-4b65-80f1-87e1003b6f46\") " Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.919976 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4424ff23-60e8-4b65-80f1-87e1003b6f46-logs" (OuterVolumeSpecName: "logs") pod 
"4424ff23-60e8-4b65-80f1-87e1003b6f46" (UID: "4424ff23-60e8-4b65-80f1-87e1003b6f46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.927725 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-scripts" (OuterVolumeSpecName: "scripts") pod "4424ff23-60e8-4b65-80f1-87e1003b6f46" (UID: "4424ff23-60e8-4b65-80f1-87e1003b6f46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.929304 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4424ff23-60e8-4b65-80f1-87e1003b6f46-kube-api-access-tl6z8" (OuterVolumeSpecName: "kube-api-access-tl6z8") pod "4424ff23-60e8-4b65-80f1-87e1003b6f46" (UID: "4424ff23-60e8-4b65-80f1-87e1003b6f46"). InnerVolumeSpecName "kube-api-access-tl6z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.929626 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-config-data" (OuterVolumeSpecName: "config-data") pod "4424ff23-60e8-4b65-80f1-87e1003b6f46" (UID: "4424ff23-60e8-4b65-80f1-87e1003b6f46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.933289 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4424ff23-60e8-4b65-80f1-87e1003b6f46-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4424ff23-60e8-4b65-80f1-87e1003b6f46" (UID: "4424ff23-60e8-4b65-80f1-87e1003b6f46"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:50 crc kubenswrapper[4758]: I0122 16:51:50.968908 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2l5dg"] Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.020622 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4424ff23-60e8-4b65-80f1-87e1003b6f46-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.020657 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4424ff23-60e8-4b65-80f1-87e1003b6f46-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.020670 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl6z8\" (UniqueName: \"kubernetes.io/projected/4424ff23-60e8-4b65-80f1-87e1003b6f46-kube-api-access-tl6z8\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.020684 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.020697 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4424ff23-60e8-4b65-80f1-87e1003b6f46-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.090247 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-88b76f788-th2jq"] Jan 22 16:51:51 crc kubenswrapper[4758]: E0122 16:51:51.281036 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 22 16:51:51 crc kubenswrapper[4758]: E0122 16:51:51.281104 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 22 16:51:51 crc kubenswrapper[4758]: E0122 16:51:51.281241 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.196:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnz85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-dmssm_openstack(7a5061fa-23f9-42ce-9682-a3fd99d419d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:51:51 crc kubenswrapper[4758]: E0122 16:51:51.282473 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-dmssm" podUID="7a5061fa-23f9-42ce-9682-a3fd99d419d7" Jan 22 16:51:51 crc kubenswrapper[4758]: W0122 16:51:51.329657 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad0bebb3_f086_4c81_8210_5ff9fed77ea4.slice/crio-df2cb96a282b935a76dd9f5a9f36c2cc8f0bf12a002dc0c8d37c19c710e556f1 WatchSource:0}: Error finding container df2cb96a282b935a76dd9f5a9f36c2cc8f0bf12a002dc0c8d37c19c710e556f1: Status 404 returned error can't find the container with id df2cb96a282b935a76dd9f5a9f36c2cc8f0bf12a002dc0c8d37c19c710e556f1 Jan 22 16:51:51 crc kubenswrapper[4758]: W0122 16:51:51.330540 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40487aaa_4c45_41b2_ab14_76477ed2f4bb.slice/crio-f3f9941d0319e5be68d31ff2956ac7851959f13ebf64cc637fe65406c78ee073 WatchSource:0}: Error finding container f3f9941d0319e5be68d31ff2956ac7851959f13ebf64cc637fe65406c78ee073: Status 404 returned error can't find the container with id f3f9941d0319e5be68d31ff2956ac7851959f13ebf64cc637fe65406c78ee073 Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.391362 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57f558b485-lrj7s" event={"ID":"4424ff23-60e8-4b65-80f1-87e1003b6f46","Type":"ContainerDied","Data":"e22d74a76dd4a2b886dab039bebf7877b847c30eb40232a5256b67257958671c"} Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.391386 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57f558b485-lrj7s" Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.392951 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2l5dg" event={"ID":"ad0bebb3-f086-4c81-8210-5ff9fed77ea4","Type":"ContainerStarted","Data":"df2cb96a282b935a76dd9f5a9f36c2cc8f0bf12a002dc0c8d37c19c710e556f1"} Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.396761 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b76f788-th2jq" event={"ID":"40487aaa-4c45-41b2-ab14-76477ed2f4bb","Type":"ContainerStarted","Data":"f3f9941d0319e5be68d31ff2956ac7851959f13ebf64cc637fe65406c78ee073"} Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.400175 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.402141 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55b94d9b56-4x8cx" event={"ID":"44cc928c-2531-4055-9b8f-b36957f3485d","Type":"ContainerStarted","Data":"1925eb74e5abe2f00b751aca5b2376cefb445d08dcafd0932af720821c976c3e"} Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.406303 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-744dd76757-hj9wx" event={"ID":"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e","Type":"ContainerStarted","Data":"332c1e7e71ca803ff31574fad1467642df76cd860af0a224a358848c1050417e"} Jan 22 16:51:51 crc kubenswrapper[4758]: E0122 16:51:51.407547 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-dmssm" podUID="7a5061fa-23f9-42ce-9682-a3fd99d419d7" Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.422267 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" podStartSLOduration=44.4222519 podStartE2EDuration="44.4222519s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:51.415572829 +0000 UTC m=+1332.898912124" watchObservedRunningTime="2026-01-22 16:51:51.4222519 +0000 UTC m=+1332.905591185" Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.466102 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57f558b485-lrj7s"] Jan 22 16:51:51 crc kubenswrapper[4758]: I0122 16:51:51.476067 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57f558b485-lrj7s"] Jan 22 16:51:52 crc kubenswrapper[4758]: I0122 16:51:52.420719 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" event={"ID":"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6","Type":"ContainerStarted","Data":"a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca"} Jan 22 16:51:52 crc kubenswrapper[4758]: I0122 16:51:52.422561 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7b167566-11db-4fba-9e9b-711b7a5f950d","Type":"ContainerStarted","Data":"ee2bc907eed613f5169de3fa82eeb6f0b605e649b8c4113b8f56a330c31c6b38"} Jan 22 16:51:52 crc kubenswrapper[4758]: I0122 16:51:52.422713 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/watcher-api-0" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api-log" containerID="cri-o://ca8a7fdb46a4f7dcb61ead11f8b04d5b481d20cfe1c641eb489a5d5929fe9c8a" gracePeriod=30 Jan 22 16:51:52 crc kubenswrapper[4758]: I0122 16:51:52.422807 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api" containerID="cri-o://ee2bc907eed613f5169de3fa82eeb6f0b605e649b8c4113b8f56a330c31c6b38" gracePeriod=30 Jan 22 16:51:52 crc kubenswrapper[4758]: I0122 16:51:52.422845 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 16:51:52 crc kubenswrapper[4758]: I0122 16:51:52.429730 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": EOF" Jan 22 16:51:52 crc kubenswrapper[4758]: I0122 16:51:52.441794 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=45.441726844 podStartE2EDuration="45.441726844s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:52.43972504 +0000 UTC m=+1333.923064325" watchObservedRunningTime="2026-01-22 16:51:52.441726844 +0000 UTC m=+1333.925066129" Jan 22 16:51:52 crc kubenswrapper[4758]: I0122 16:51:52.824080 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4424ff23-60e8-4b65-80f1-87e1003b6f46" path="/var/lib/kubelet/pods/4424ff23-60e8-4b65-80f1-87e1003b6f46/volumes" Jan 22 16:51:53 crc kubenswrapper[4758]: I0122 16:51:53.061145 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 16:51:53 crc kubenswrapper[4758]: I0122 16:51:53.431986 4758 generic.go:334] "Generic (PLEG): container finished" podID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerID="ca8a7fdb46a4f7dcb61ead11f8b04d5b481d20cfe1c641eb489a5d5929fe9c8a" exitCode=143 Jan 22 16:51:53 crc kubenswrapper[4758]: I0122 16:51:53.432817 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7b167566-11db-4fba-9e9b-711b7a5f950d","Type":"ContainerDied","Data":"ca8a7fdb46a4f7dcb61ead11f8b04d5b481d20cfe1c641eb489a5d5929fe9c8a"} Jan 22 16:51:55 crc kubenswrapper[4758]: I0122 16:51:55.168843 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": read tcp 10.217.0.2:33914->10.217.0.150:9322: read: connection reset by peer" Jan 22 16:51:56 crc kubenswrapper[4758]: I0122 16:51:56.459058 4758 generic.go:334] "Generic (PLEG): container finished" podID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerID="ee2bc907eed613f5169de3fa82eeb6f0b605e649b8c4113b8f56a330c31c6b38" exitCode=0 Jan 22 16:51:56 crc kubenswrapper[4758]: I0122 16:51:56.459102 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7b167566-11db-4fba-9e9b-711b7a5f950d","Type":"ContainerDied","Data":"ee2bc907eed613f5169de3fa82eeb6f0b605e649b8c4113b8f56a330c31c6b38"} Jan 22 16:51:57 crc kubenswrapper[4758]: I0122 16:51:57.287300 4758 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 16:51:57 crc kubenswrapper[4758]: E0122 16:51:57.342135 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 22 16:51:57 crc kubenswrapper[4758]: E0122 16:51:57.342198 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 22 16:51:57 crc kubenswrapper[4758]: E0122 16:51:57.342331 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.196:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5x5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-529mh_openstack(b1666997-8287-4065-bcaf-409713fc6782): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:51:57 crc kubenswrapper[4758]: E0122 16:51:57.343547 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack/cinder-db-sync-529mh" podUID="b1666997-8287-4065-bcaf-409713fc6782" Jan 22 16:51:57 crc kubenswrapper[4758]: E0122 16:51:57.474944 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-529mh" podUID="b1666997-8287-4065-bcaf-409713fc6782" Jan 22 16:51:57 crc kubenswrapper[4758]: E0122 16:51:57.573935 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-ceilometer-notification:watcher_latest" Jan 22 16:51:57 crc kubenswrapper[4758]: E0122 16:51:57.574241 4758 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-ceilometer-notification:watcher_latest" Jan 22 16:51:57 crc kubenswrapper[4758]: E0122 16:51:57.574439 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:38.102.83.196:5001/podified-master-centos10/openstack-ceilometer-notification:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch68dh55fhc8h5f5h578h67fhc9hdh589h575h698h685h94hd9hd6hb6hdbh9bhbdh684h76h69h5d7h99h57bhdbhb5hd7h5c5h5dbh65cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chd47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a67f1efb-4c74-4acd-9948-de1491a8479c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:51:57 crc kubenswrapper[4758]: I0122 16:51:57.864767 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:51:57 crc kubenswrapper[4758]: I0122 16:51:57.965268 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-custom-prometheus-ca\") pod \"7b167566-11db-4fba-9e9b-711b7a5f950d\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " Jan 22 16:51:57 crc kubenswrapper[4758]: I0122 16:51:57.965901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b167566-11db-4fba-9e9b-711b7a5f950d-logs\") pod \"7b167566-11db-4fba-9e9b-711b7a5f950d\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " Jan 22 16:51:57 crc kubenswrapper[4758]: I0122 16:51:57.966086 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-config-data\") pod \"7b167566-11db-4fba-9e9b-711b7a5f950d\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " Jan 22 16:51:57 crc kubenswrapper[4758]: I0122 16:51:57.966263 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-combined-ca-bundle\") pod \"7b167566-11db-4fba-9e9b-711b7a5f950d\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " Jan 22 16:51:57 crc kubenswrapper[4758]: I0122 16:51:57.966404 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prmgg\" (UniqueName: \"kubernetes.io/projected/7b167566-11db-4fba-9e9b-711b7a5f950d-kube-api-access-prmgg\") pod \"7b167566-11db-4fba-9e9b-711b7a5f950d\" (UID: \"7b167566-11db-4fba-9e9b-711b7a5f950d\") " Jan 22 16:51:57 crc kubenswrapper[4758]: I0122 16:51:57.968907 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b167566-11db-4fba-9e9b-711b7a5f950d-logs" (OuterVolumeSpecName: "logs") pod "7b167566-11db-4fba-9e9b-711b7a5f950d" (UID: "7b167566-11db-4fba-9e9b-711b7a5f950d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:51:57 crc kubenswrapper[4758]: I0122 16:51:57.984017 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b167566-11db-4fba-9e9b-711b7a5f950d-kube-api-access-prmgg" (OuterVolumeSpecName: "kube-api-access-prmgg") pod "7b167566-11db-4fba-9e9b-711b7a5f950d" (UID: "7b167566-11db-4fba-9e9b-711b7a5f950d"). InnerVolumeSpecName "kube-api-access-prmgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.068853 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b167566-11db-4fba-9e9b-711b7a5f950d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.068933 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prmgg\" (UniqueName: \"kubernetes.io/projected/7b167566-11db-4fba-9e9b-711b7a5f950d-kube-api-access-prmgg\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.153158 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7b167566-11db-4fba-9e9b-711b7a5f950d" (UID: "7b167566-11db-4fba-9e9b-711b7a5f950d"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.170323 4758 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.208963 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b167566-11db-4fba-9e9b-711b7a5f950d" (UID: "7b167566-11db-4fba-9e9b-711b7a5f950d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.272859 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.287297 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-config-data" (OuterVolumeSpecName: "config-data") pod "7b167566-11db-4fba-9e9b-711b7a5f950d" (UID: "7b167566-11db-4fba-9e9b-711b7a5f950d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.374977 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b167566-11db-4fba-9e9b-711b7a5f950d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.492784 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7b167566-11db-4fba-9e9b-711b7a5f950d","Type":"ContainerDied","Data":"0b344bff57236d08e3baf13f0393ce1fe41a00bd1b26c2436014a3fa2fd2a966"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.493168 4758 scope.go:117] "RemoveContainer" containerID="ee2bc907eed613f5169de3fa82eeb6f0b605e649b8c4113b8f56a330c31c6b38" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.493348 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.506519 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5d5b589c-8c4hx" event={"ID":"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2","Type":"ContainerStarted","Data":"8bdd78becfb73f8d3bd1890964a73880ab03efb2e937200ea3fac388b7cf775e"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.506559 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5d5b589c-8c4hx" event={"ID":"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2","Type":"ContainerStarted","Data":"563966c491bebd90caa468ebe97ba454c532453d5ab1012d0fd7d6cc5ed5ff66"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.506682 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b5d5b589c-8c4hx" podUID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerName="horizon-log" containerID="cri-o://563966c491bebd90caa468ebe97ba454c532453d5ab1012d0fd7d6cc5ed5ff66" gracePeriod=30 Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.506957 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b5d5b589c-8c4hx" podUID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerName="horizon" containerID="cri-o://8bdd78becfb73f8d3bd1890964a73880ab03efb2e937200ea3fac388b7cf775e" gracePeriod=30 Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.518130 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2l5dg" event={"ID":"ad0bebb3-f086-4c81-8210-5ff9fed77ea4","Type":"ContainerStarted","Data":"9b2b3ca26420af022c92fa8fa71bffec91d0a63c273807336b9b11c84bcdab6e"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.531615 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ea53227e-7c78-42b4-959c-dd2531914be2","Type":"ContainerStarted","Data":"2281de046c6ce3884f86c4c8d3079b3033bb8b0a156ee418a39def692010ca33"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.532774 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b5d5b589c-8c4hx" podStartSLOduration=4.199859219 podStartE2EDuration="51.532759672s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="2026-01-22 16:51:10.243995841 +0000 UTC m=+1291.727335126" lastFinishedPulling="2026-01-22 16:51:57.576896294 +0000 UTC m=+1339.060235579" observedRunningTime="2026-01-22 16:51:58.531164329 +0000 UTC m=+1340.014503614" watchObservedRunningTime="2026-01-22 16:51:58.532759672 +0000 UTC m=+1340.016098957" Jan 22 16:51:58 
crc kubenswrapper[4758]: I0122 16:51:58.555815 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b76f788-th2jq" event={"ID":"40487aaa-4c45-41b2-ab14-76477ed2f4bb","Type":"ContainerStarted","Data":"f512c542a3f7080a3e0e9498fe8473553577ff1a142250d2654113eab457a261"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.573247 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2l5dg" podStartSLOduration=34.573232027 podStartE2EDuration="34.573232027s" podCreationTimestamp="2026-01-22 16:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:58.549695864 +0000 UTC m=+1340.033035149" watchObservedRunningTime="2026-01-22 16:51:58.573232027 +0000 UTC m=+1340.056571312" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.579617 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55b94d9b56-4x8cx" event={"ID":"44cc928c-2531-4055-9b8f-b36957f3485d","Type":"ContainerStarted","Data":"dd8f75a58ea82eba9d86531184d0e6a5cb221c47d677a6d7b6f631159471462e"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.590821 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.601316 4758 scope.go:117] "RemoveContainer" containerID="ca8a7fdb46a4f7dcb61ead11f8b04d5b481d20cfe1c641eb489a5d5929fe9c8a" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.616693 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-744dd76757-hj9wx" event={"ID":"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e","Type":"ContainerStarted","Data":"03be0baef6e9d0e040038bc9408c12842e56c34ea8fb131382ccf9b85a67ae89"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.616909 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-744dd76757-hj9wx" podUID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerName="horizon-log" containerID="cri-o://03be0baef6e9d0e040038bc9408c12842e56c34ea8fb131382ccf9b85a67ae89" gracePeriod=30 Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.617352 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-744dd76757-hj9wx" podUID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerName="horizon" containerID="cri-o://60fe6d10518ecf7840c52f3d3028d2c8fc1ff34292e0a985a667d2e13644f112" gracePeriod=30 Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.617901 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.623437 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=4.103180074 podStartE2EDuration="51.623411105s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="2026-01-22 16:51:10.193884457 +0000 UTC m=+1291.677223752" lastFinishedPulling="2026-01-22 16:51:57.714115498 +0000 UTC m=+1339.197454783" observedRunningTime="2026-01-22 16:51:58.585018148 +0000 UTC m=+1340.068357443" watchObservedRunningTime="2026-01-22 16:51:58.623411105 +0000 UTC m=+1340.106750390" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.627241 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0","Type":"ContainerStarted","Data":"f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.630936 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lv7h6" event={"ID":"1cc69af0-0ef0-4399-9084-e81419b65acd","Type":"ContainerStarted","Data":"3a7eb876a027926425012f48e1cd423431ed1fa33024a0073914b0d281905ffd"} Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.702960 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.712186 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:51:58 crc kubenswrapper[4758]: E0122 16:51:58.712552 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a9b58cb-7958-4dae-82ec-c435a970a8db" containerName="init" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.712564 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a9b58cb-7958-4dae-82ec-c435a970a8db" containerName="init" Jan 22 16:51:58 crc kubenswrapper[4758]: E0122 16:51:58.712590 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.712596 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api" Jan 22 16:51:58 crc kubenswrapper[4758]: E0122 16:51:58.712606 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api-log" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.712613 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api-log" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.712850 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.712866 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" containerName="watcher-api-log" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.712875 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a9b58cb-7958-4dae-82ec-c435a970a8db" containerName="init" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.713755 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.716863 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.720570 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-55b94d9b56-4x8cx" podStartSLOduration=42.720552706 podStartE2EDuration="42.720552706s" podCreationTimestamp="2026-01-22 16:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:58.60707472 +0000 UTC m=+1340.090413995" watchObservedRunningTime="2026-01-22 16:51:58.720552706 +0000 UTC m=+1340.203891991" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.740094 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:51:58 crc kubenswrapper[4758]: E0122 16:51:58.760482 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b167566_11db_4fba_9e9b_711b7a5f950d.slice/crio-0b344bff57236d08e3baf13f0393ce1fe41a00bd1b26c2436014a3fa2fd2a966\": RecentStats: unable to find data in memory cache]" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.774492 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-744dd76757-hj9wx" podStartSLOduration=47.774467586 podStartE2EDuration="47.774467586s" podCreationTimestamp="2026-01-22 16:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:58.650803783 +0000 UTC m=+1340.134143068" watchObservedRunningTime="2026-01-22 16:51:58.774467586 +0000 UTC m=+1340.257806871" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.789301 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=4.260055628 podStartE2EDuration="51.789278931s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="2026-01-22 16:51:10.110327061 +0000 UTC m=+1291.593666346" lastFinishedPulling="2026-01-22 16:51:57.639550364 +0000 UTC m=+1339.122889649" observedRunningTime="2026-01-22 16:51:58.674415077 +0000 UTC m=+1340.157754372" watchObservedRunningTime="2026-01-22 16:51:58.789278931 +0000 UTC m=+1340.272618216" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.790578 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.791048 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.791245 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-config-data\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.791341 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwwc5\" (UniqueName: \"kubernetes.io/projected/7e024ebe-16b8-454b-b1ca-2e42e6883e65-kube-api-access-rwwc5\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.791489 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e024ebe-16b8-454b-b1ca-2e42e6883e65-logs\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.803419 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lv7h6" podStartSLOduration=4.13904104 podStartE2EDuration="51.803390326s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="2026-01-22 16:51:10.061119331 +0000 UTC m=+1291.544458616" lastFinishedPulling="2026-01-22 16:51:57.725468617 +0000 UTC m=+1339.208807902" observedRunningTime="2026-01-22 16:51:58.703233563 +0000 UTC m=+1340.186572848" watchObservedRunningTime="2026-01-22 16:51:58.803390326 +0000 UTC m=+1340.286729631" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.849711 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b167566-11db-4fba-9e9b-711b7a5f950d" path="/var/lib/kubelet/pods/7b167566-11db-4fba-9e9b-711b7a5f950d/volumes" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.850405 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.887992 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b65dddd8f-twdkl"] Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.888535 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" podUID="287aac2e-b390-416b-be0e-4b8b07e5e486" containerName="dnsmasq-dns" containerID="cri-o://e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5" gracePeriod=10 Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.895130 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.895305 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-config-data\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.895336 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwwc5\" (UniqueName: \"kubernetes.io/projected/7e024ebe-16b8-454b-b1ca-2e42e6883e65-kube-api-access-rwwc5\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " 
pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.895430 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e024ebe-16b8-454b-b1ca-2e42e6883e65-logs\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.895485 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.913304 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.914039 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e024ebe-16b8-454b-b1ca-2e42e6883e65-logs\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.930946 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.932358 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-config-data\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:58 crc kubenswrapper[4758]: I0122 16:51:58.943790 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwwc5\" (UniqueName: \"kubernetes.io/projected/7e024ebe-16b8-454b-b1ca-2e42e6883e65-kube-api-access-rwwc5\") pod \"watcher-api-0\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " pod="openstack/watcher-api-0" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.095826 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.486530 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.524510 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjxs5\" (UniqueName: \"kubernetes.io/projected/287aac2e-b390-416b-be0e-4b8b07e5e486-kube-api-access-zjxs5\") pod \"287aac2e-b390-416b-be0e-4b8b07e5e486\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.524696 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-swift-storage-0\") pod \"287aac2e-b390-416b-be0e-4b8b07e5e486\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.524799 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-nb\") pod \"287aac2e-b390-416b-be0e-4b8b07e5e486\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.524839 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-config\") pod \"287aac2e-b390-416b-be0e-4b8b07e5e486\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.524900 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-sb\") pod \"287aac2e-b390-416b-be0e-4b8b07e5e486\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.524937 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-svc\") pod \"287aac2e-b390-416b-be0e-4b8b07e5e486\" (UID: \"287aac2e-b390-416b-be0e-4b8b07e5e486\") " Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.553304 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287aac2e-b390-416b-be0e-4b8b07e5e486-kube-api-access-zjxs5" (OuterVolumeSpecName: "kube-api-access-zjxs5") pod "287aac2e-b390-416b-be0e-4b8b07e5e486" (UID: "287aac2e-b390-416b-be0e-4b8b07e5e486"). InnerVolumeSpecName "kube-api-access-zjxs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.596522 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "287aac2e-b390-416b-be0e-4b8b07e5e486" (UID: "287aac2e-b390-416b-be0e-4b8b07e5e486"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.607644 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-config" (OuterVolumeSpecName: "config") pod "287aac2e-b390-416b-be0e-4b8b07e5e486" (UID: "287aac2e-b390-416b-be0e-4b8b07e5e486"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.622520 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "287aac2e-b390-416b-be0e-4b8b07e5e486" (UID: "287aac2e-b390-416b-be0e-4b8b07e5e486"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.628219 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "287aac2e-b390-416b-be0e-4b8b07e5e486" (UID: "287aac2e-b390-416b-be0e-4b8b07e5e486"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.634052 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjxs5\" (UniqueName: \"kubernetes.io/projected/287aac2e-b390-416b-be0e-4b8b07e5e486-kube-api-access-zjxs5\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.634087 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.634099 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.634107 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.634116 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.653020 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b76f788-th2jq" event={"ID":"40487aaa-4c45-41b2-ab14-76477ed2f4bb","Type":"ContainerStarted","Data":"3f804875d0ec8e65f89084335817802426f37c82f619dc121c0a2be09bd1b67f"} Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.660284 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "287aac2e-b390-416b-be0e-4b8b07e5e486" (UID: "287aac2e-b390-416b-be0e-4b8b07e5e486"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.672497 4758 generic.go:334] "Generic (PLEG): container finished" podID="287aac2e-b390-416b-be0e-4b8b07e5e486" containerID="e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5" exitCode=0 Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.672589 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" event={"ID":"287aac2e-b390-416b-be0e-4b8b07e5e486","Type":"ContainerDied","Data":"e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5"} Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.672621 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" event={"ID":"287aac2e-b390-416b-be0e-4b8b07e5e486","Type":"ContainerDied","Data":"96efa6b78ec96646c06e67bce773705e39b77cd54bcd42152da02482720349f3"} Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.672642 4758 scope.go:117] "RemoveContainer" containerID="e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.672787 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b65dddd8f-twdkl" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.685324 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-88b76f788-th2jq" podStartSLOduration=43.685301816 podStartE2EDuration="43.685301816s" podCreationTimestamp="2026-01-22 16:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:59.673914276 +0000 UTC m=+1341.157253561" watchObservedRunningTime="2026-01-22 16:51:59.685301816 +0000 UTC m=+1341.168641101" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.695059 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55b94d9b56-4x8cx" event={"ID":"44cc928c-2531-4055-9b8f-b36957f3485d","Type":"ContainerStarted","Data":"8d3f2f2842bcef5940fbc10c2f032cc368070776c4813a43c0a47bb26615e675"} Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.714562 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-744dd76757-hj9wx" event={"ID":"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e","Type":"ContainerStarted","Data":"60fe6d10518ecf7840c52f3d3028d2c8fc1ff34292e0a985a667d2e13644f112"} Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.730545 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9h9hb" event={"ID":"f8fe0f21-8912-4d6c-ba4f-6600456784e1","Type":"ContainerStarted","Data":"e372811a729ff0df8fbd6e21e7f66d2104eef17385e80d9766761eea836f31d5"} Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.736180 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/287aac2e-b390-416b-be0e-4b8b07e5e486-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.737787 4758 scope.go:117] "RemoveContainer" containerID="badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.750622 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.757814 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6b65dddd8f-twdkl"] Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.765855 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b65dddd8f-twdkl"] Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.766748 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9h9hb" podStartSLOduration=3.886462528 podStartE2EDuration="58.766720558s" podCreationTimestamp="2026-01-22 16:51:01 +0000 UTC" firstStartedPulling="2026-01-22 16:51:02.863071495 +0000 UTC m=+1284.346410780" lastFinishedPulling="2026-01-22 16:51:57.743329525 +0000 UTC m=+1339.226668810" observedRunningTime="2026-01-22 16:51:59.760602681 +0000 UTC m=+1341.243941966" watchObservedRunningTime="2026-01-22 16:51:59.766720558 +0000 UTC m=+1341.250059843" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.800799 4758 scope.go:117] "RemoveContainer" containerID="e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5" Jan 22 16:51:59 crc kubenswrapper[4758]: E0122 16:51:59.801110 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5\": container with ID starting with e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5 not found: ID does not exist" containerID="e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.801176 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5"} err="failed to get container status \"e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5\": rpc error: code = NotFound desc = could not find container \"e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5\": container with ID starting with e65a760ba49ab042f2f458c16ce75b91642ab76eef2cc9b7a3371ff2c6b18aa5 not found: ID does not exist" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.801197 4758 scope.go:117] "RemoveContainer" containerID="badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266" Jan 22 16:51:59 crc kubenswrapper[4758]: E0122 16:51:59.801622 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266\": container with ID starting with badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266 not found: ID does not exist" containerID="badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266" Jan 22 16:51:59 crc kubenswrapper[4758]: I0122 16:51:59.801650 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266"} err="failed to get container status \"badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266\": rpc error: code = NotFound desc = could not find container \"badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266\": container with ID starting with badce9d7df947968779dba417b95d0ae50df8188051bae34446e57fe7f2c0266 not found: ID does not exist" Jan 22 16:52:00 crc kubenswrapper[4758]: I0122 16:52:00.745326 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"7e024ebe-16b8-454b-b1ca-2e42e6883e65","Type":"ContainerStarted","Data":"9eeddd727171947e1836501aaef4a7fc5e321f11099ad1273369dfad7e7322f2"} Jan 22 16:52:00 crc kubenswrapper[4758]: I0122 16:52:00.745557 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7e024ebe-16b8-454b-b1ca-2e42e6883e65","Type":"ContainerStarted","Data":"8227401b4532badd1f72d6e02c9e94c68bd6e847c6f48c5118ae6e7d97aa2a27"} Jan 22 16:52:00 crc kubenswrapper[4758]: I0122 16:52:00.745567 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7e024ebe-16b8-454b-b1ca-2e42e6883e65","Type":"ContainerStarted","Data":"7d34d4fb6b65ecefc0772db37f4946a2a1b7986e257a44062b8da614f947ca2b"} Jan 22 16:52:00 crc kubenswrapper[4758]: I0122 16:52:00.746422 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 16:52:00 crc kubenswrapper[4758]: I0122 16:52:00.825485 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="287aac2e-b390-416b-be0e-4b8b07e5e486" path="/var/lib/kubelet/pods/287aac2e-b390-416b-be0e-4b8b07e5e486/volumes" Jan 22 16:52:01 crc kubenswrapper[4758]: I0122 16:52:01.556512 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:52:01 crc kubenswrapper[4758]: I0122 16:52:01.772301 4758 generic.go:334] "Generic (PLEG): container finished" podID="c276b685-1d06-4272-9eeb-7b759a8bffff" containerID="3d90a62b483d010a7a8dc323d0a9383e4c40248ba21a44fdc6e779c4e5730570" exitCode=0 Jan 22 16:52:01 crc kubenswrapper[4758]: I0122 16:52:01.772390 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c52rv" event={"ID":"c276b685-1d06-4272-9eeb-7b759a8bffff","Type":"ContainerDied","Data":"3d90a62b483d010a7a8dc323d0a9383e4c40248ba21a44fdc6e779c4e5730570"} Jan 22 16:52:01 crc kubenswrapper[4758]: I0122 16:52:01.796367 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.7963469610000002 podStartE2EDuration="3.796346961s" podCreationTimestamp="2026-01-22 16:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:00.773128925 +0000 UTC m=+1342.256468210" watchObservedRunningTime="2026-01-22 16:52:01.796346961 +0000 UTC m=+1343.279686246" Jan 22 16:52:02 crc kubenswrapper[4758]: I0122 16:52:02.782047 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:52:03 crc kubenswrapper[4758]: I0122 16:52:03.253984 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 22 16:52:03 crc kubenswrapper[4758]: I0122 16:52:03.331375 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 22 16:52:04 crc kubenswrapper[4758]: I0122 16:52:04.097805 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 16:52:04 crc kubenswrapper[4758]: I0122 16:52:04.803071 4758 generic.go:334] "Generic (PLEG): container finished" podID="ad0bebb3-f086-4c81-8210-5ff9fed77ea4" containerID="9b2b3ca26420af022c92fa8fa71bffec91d0a63c273807336b9b11c84bcdab6e" exitCode=0 Jan 22 16:52:04 crc kubenswrapper[4758]: I0122 16:52:04.803167 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2l5dg" 
event={"ID":"ad0bebb3-f086-4c81-8210-5ff9fed77ea4","Type":"ContainerDied","Data":"9b2b3ca26420af022c92fa8fa71bffec91d0a63c273807336b9b11c84bcdab6e"} Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.762257 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-c52rv" Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.877531 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-c52rv" Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.877914 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-c52rv" event={"ID":"c276b685-1d06-4272-9eeb-7b759a8bffff","Type":"ContainerDied","Data":"bdda7b4130b590b05b0c7299e6a52049afd08bb5a570e99c0c911722dd51e7fb"} Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.877937 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdda7b4130b590b05b0c7299e6a52049afd08bb5a570e99c0c911722dd51e7fb" Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.883241 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdswc\" (UniqueName: \"kubernetes.io/projected/c276b685-1d06-4272-9eeb-7b759a8bffff-kube-api-access-hdswc\") pod \"c276b685-1d06-4272-9eeb-7b759a8bffff\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.883481 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-config\") pod \"c276b685-1d06-4272-9eeb-7b759a8bffff\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.883614 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-combined-ca-bundle\") pod \"c276b685-1d06-4272-9eeb-7b759a8bffff\" (UID: \"c276b685-1d06-4272-9eeb-7b759a8bffff\") " Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.916876 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c276b685-1d06-4272-9eeb-7b759a8bffff-kube-api-access-hdswc" (OuterVolumeSpecName: "kube-api-access-hdswc") pod "c276b685-1d06-4272-9eeb-7b759a8bffff" (UID: "c276b685-1d06-4272-9eeb-7b759a8bffff"). InnerVolumeSpecName "kube-api-access-hdswc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.927286 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c276b685-1d06-4272-9eeb-7b759a8bffff" (UID: "c276b685-1d06-4272-9eeb-7b759a8bffff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.954210 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-config" (OuterVolumeSpecName: "config") pod "c276b685-1d06-4272-9eeb-7b759a8bffff" (UID: "c276b685-1d06-4272-9eeb-7b759a8bffff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.985600 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdswc\" (UniqueName: \"kubernetes.io/projected/c276b685-1d06-4272-9eeb-7b759a8bffff-kube-api-access-hdswc\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.985671 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:05 crc kubenswrapper[4758]: I0122 16:52:05.985682 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c276b685-1d06-4272-9eeb-7b759a8bffff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.342123 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.412201 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-config-data\") pod \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.412258 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-combined-ca-bundle\") pod \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.412285 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-credential-keys\") pod \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.412308 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlflf\" (UniqueName: \"kubernetes.io/projected/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-kube-api-access-nlflf\") pod \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.412381 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-fernet-keys\") pod \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.412400 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-scripts\") pod \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\" (UID: \"ad0bebb3-f086-4c81-8210-5ff9fed77ea4\") " Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.421732 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ad0bebb3-f086-4c81-8210-5ff9fed77ea4" (UID: "ad0bebb3-f086-4c81-8210-5ff9fed77ea4"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.433162 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-kube-api-access-nlflf" (OuterVolumeSpecName: "kube-api-access-nlflf") pod "ad0bebb3-f086-4c81-8210-5ff9fed77ea4" (UID: "ad0bebb3-f086-4c81-8210-5ff9fed77ea4"). InnerVolumeSpecName "kube-api-access-nlflf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.437140 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-scripts" (OuterVolumeSpecName: "scripts") pod "ad0bebb3-f086-4c81-8210-5ff9fed77ea4" (UID: "ad0bebb3-f086-4c81-8210-5ff9fed77ea4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.438779 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ad0bebb3-f086-4c81-8210-5ff9fed77ea4" (UID: "ad0bebb3-f086-4c81-8210-5ff9fed77ea4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.457899 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad0bebb3-f086-4c81-8210-5ff9fed77ea4" (UID: "ad0bebb3-f086-4c81-8210-5ff9fed77ea4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.460826 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-config-data" (OuterVolumeSpecName: "config-data") pod "ad0bebb3-f086-4c81-8210-5ff9fed77ea4" (UID: "ad0bebb3-f086-4c81-8210-5ff9fed77ea4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.514200 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.514238 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.514252 4758 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.514267 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlflf\" (UniqueName: \"kubernetes.io/projected/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-kube-api-access-nlflf\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.514281 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.514292 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0bebb3-f086-4c81-8210-5ff9fed77ea4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.901656 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.902870 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.945246 4758 generic.go:334] "Generic (PLEG): container finished" podID="1cc69af0-0ef0-4399-9084-e81419b65acd" containerID="3a7eb876a027926425012f48e1cd423431ed1fa33024a0073914b0d281905ffd" exitCode=0 Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.945278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lv7h6" event={"ID":"1cc69af0-0ef0-4399-9084-e81419b65acd","Type":"ContainerDied","Data":"3a7eb876a027926425012f48e1cd423431ed1fa33024a0073914b0d281905ffd"} Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.986960 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2l5dg" event={"ID":"ad0bebb3-f086-4c81-8210-5ff9fed77ea4","Type":"ContainerDied","Data":"df2cb96a282b935a76dd9f5a9f36c2cc8f0bf12a002dc0c8d37c19c710e556f1"} Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.987001 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df2cb96a282b935a76dd9f5a9f36c2cc8f0bf12a002dc0c8d37c19c710e556f1" Jan 22 16:52:06 crc kubenswrapper[4758]: I0122 16:52:06.987069 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2l5dg" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.009683 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a67f1efb-4c74-4acd-9948-de1491a8479c","Type":"ContainerStarted","Data":"80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05"} Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.017451 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dmssm" event={"ID":"7a5061fa-23f9-42ce-9682-a3fd99d419d7","Type":"ContainerStarted","Data":"d3b437dad77713b4711bcd032a920a37b7499f02df7fdea656e732ca1a489d0f"} Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.043850 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647dd9b96f-tvdcp"] Jan 22 16:52:07 crc kubenswrapper[4758]: E0122 16:52:07.044241 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c276b685-1d06-4272-9eeb-7b759a8bffff" containerName="neutron-db-sync" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.044253 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c276b685-1d06-4272-9eeb-7b759a8bffff" containerName="neutron-db-sync" Jan 22 16:52:07 crc kubenswrapper[4758]: E0122 16:52:07.044282 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="287aac2e-b390-416b-be0e-4b8b07e5e486" containerName="dnsmasq-dns" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.044289 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="287aac2e-b390-416b-be0e-4b8b07e5e486" containerName="dnsmasq-dns" Jan 22 16:52:07 crc kubenswrapper[4758]: E0122 16:52:07.044301 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0bebb3-f086-4c81-8210-5ff9fed77ea4" containerName="keystone-bootstrap" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.044374 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0bebb3-f086-4c81-8210-5ff9fed77ea4" containerName="keystone-bootstrap" Jan 22 16:52:07 crc kubenswrapper[4758]: E0122 16:52:07.044405 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="287aac2e-b390-416b-be0e-4b8b07e5e486" containerName="init" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.044411 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="287aac2e-b390-416b-be0e-4b8b07e5e486" containerName="init" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.044587 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c276b685-1d06-4272-9eeb-7b759a8bffff" containerName="neutron-db-sync" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.044597 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="287aac2e-b390-416b-be0e-4b8b07e5e486" containerName="dnsmasq-dns" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.044605 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0bebb3-f086-4c81-8210-5ff9fed77ea4" containerName="keystone-bootstrap" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.045858 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.147863 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l295l\" (UniqueName: \"kubernetes.io/projected/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-kube-api-access-l295l\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.147963 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-svc\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.148068 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-sb\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.148086 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-config\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.148103 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-nb\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.148133 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-swift-storage-0\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.212028 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5486585c8c-crbmm"] Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.213369 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.218088 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q7l7k" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.218312 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.218419 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.218519 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.218622 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.221965 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.242951 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647dd9b96f-tvdcp"] Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-sb\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249504 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-config\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249523 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-nb\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249546 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-swift-storage-0\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249580 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-combined-ca-bundle\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249603 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l295l\" (UniqueName: \"kubernetes.io/projected/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-kube-api-access-l295l\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " 
pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249629 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-public-tls-certs\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249676 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-fernet-keys\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249699 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-svc\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249714 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbrcx\" (UniqueName: \"kubernetes.io/projected/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-kube-api-access-cbrcx\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249733 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-internal-tls-certs\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249784 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-scripts\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249819 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-config-data\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.249839 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-credential-keys\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.251118 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-sb\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " 
pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.252594 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-nb\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.253058 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-svc\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.258725 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-swift-storage-0\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.267631 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-config\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.268392 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5486585c8c-crbmm"] Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.273157 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dmssm" podStartSLOduration=4.781842028 podStartE2EDuration="1m0.27313676s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="2026-01-22 16:51:10.241626328 +0000 UTC m=+1291.724965613" lastFinishedPulling="2026-01-22 16:52:05.73292106 +0000 UTC m=+1347.216260345" observedRunningTime="2026-01-22 16:52:07.045123999 +0000 UTC m=+1348.528463284" watchObservedRunningTime="2026-01-22 16:52:07.27313676 +0000 UTC m=+1348.756476045" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.288015 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.288278 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.297462 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l295l\" (UniqueName: \"kubernetes.io/projected/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-kube-api-access-l295l\") pod \"dnsmasq-dns-647dd9b96f-tvdcp\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.314812 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f8f6c6576-zfqs4"] Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.316588 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.324454 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.327757 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zvr2k" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.327955 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.328043 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f8f6c6576-zfqs4"] Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.329979 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.366727 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-combined-ca-bundle\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.366821 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-public-tls-certs\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.366916 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-fernet-keys\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.366955 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbrcx\" (UniqueName: \"kubernetes.io/projected/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-kube-api-access-cbrcx\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.366980 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-internal-tls-certs\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.367019 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-scripts\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.367075 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-config-data\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " 
pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.367097 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-credential-keys\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.375773 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-combined-ca-bundle\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.378121 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-internal-tls-certs\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.378170 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-credential-keys\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.380032 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-config-data\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.380356 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-public-tls-certs\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.383439 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-fernet-keys\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.384097 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-scripts\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.397316 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbrcx\" (UniqueName: \"kubernetes.io/projected/e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b-kube-api-access-cbrcx\") pod \"keystone-5486585c8c-crbmm\" (UID: \"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b\") " pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.468689 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-combined-ca-bundle\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.468944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-config\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.469104 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6stw\" (UniqueName: \"kubernetes.io/projected/b7312e42-6737-4296-a35b-39bbb4a6f21b-kube-api-access-x6stw\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.469237 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-httpd-config\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.469372 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-ovndb-tls-certs\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.508440 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.599619 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.600088 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6stw\" (UniqueName: \"kubernetes.io/projected/b7312e42-6737-4296-a35b-39bbb4a6f21b-kube-api-access-x6stw\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.600151 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-httpd-config\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.600184 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-ovndb-tls-certs\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.600266 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-combined-ca-bundle\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.600322 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-config\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.608731 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-config\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.617653 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-ovndb-tls-certs\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.626244 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-httpd-config\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.626626 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-combined-ca-bundle\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.629242 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x6stw\" (UniqueName: \"kubernetes.io/projected/b7312e42-6737-4296-a35b-39bbb4a6f21b-kube-api-access-x6stw\") pod \"neutron-f8f6c6576-zfqs4\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.659356 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86d8479bd8-rrvgj"] Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.666352 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.696099 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86d8479bd8-rrvgj"] Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.702420 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-combined-ca-bundle\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.702476 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-httpd-config\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.702504 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsgtl\" (UniqueName: \"kubernetes.io/projected/b37953c7-685d-4a7e-85fd-a2964e025825-kube-api-access-vsgtl\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.702530 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-ovndb-tls-certs\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.702550 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-config\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.770138 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.803611 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-combined-ca-bundle\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.803704 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-httpd-config\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.803754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsgtl\" (UniqueName: \"kubernetes.io/projected/b37953c7-685d-4a7e-85fd-a2964e025825-kube-api-access-vsgtl\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.803792 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-ovndb-tls-certs\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.803817 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-config\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.811521 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-combined-ca-bundle\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.819515 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-config\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.824482 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-httpd-config\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.825039 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-ovndb-tls-certs\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:07 crc kubenswrapper[4758]: I0122 16:52:07.830599 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vsgtl\" (UniqueName: \"kubernetes.io/projected/b37953c7-685d-4a7e-85fd-a2964e025825-kube-api-access-vsgtl\") pod \"neutron-86d8479bd8-rrvgj\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.127013 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.167170 4758 generic.go:334] "Generic (PLEG): container finished" podID="ea53227e-7c78-42b4-959c-dd2531914be2" containerID="2281de046c6ce3884f86c4c8d3079b3033bb8b0a156ee418a39def692010ca33" exitCode=1 Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.167694 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ea53227e-7c78-42b4-959c-dd2531914be2","Type":"ContainerDied","Data":"2281de046c6ce3884f86c4c8d3079b3033bb8b0a156ee418a39def692010ca33"} Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.168400 4758 scope.go:117] "RemoveContainer" containerID="2281de046c6ce3884f86c4c8d3079b3033bb8b0a156ee418a39def692010ca33" Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.255984 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.318861 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.319563 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.320311 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.320326 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.348221 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 16:52:08.466333 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5486585c8c-crbmm"] Jan 22 16:52:08 crc kubenswrapper[4758]: W0122 16:52:08.488035 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode86c0ccc_4e60_4edc_b8e1_6ba42b49fc1b.slice/crio-ba7c0e43f38c0cffae564dc6b5d786e1cedcfb22ed69cf7c25448b4a718e5ec1 WatchSource:0}: Error finding container ba7c0e43f38c0cffae564dc6b5d786e1cedcfb22ed69cf7c25448b4a718e5ec1: Status 404 returned error can't find the container with id ba7c0e43f38c0cffae564dc6b5d786e1cedcfb22ed69cf7c25448b4a718e5ec1 Jan 22 16:52:08 crc kubenswrapper[4758]: W0122 16:52:08.501205 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13009e8c_ff8c_4429_ba2d_3a0053fe0ff4.slice/crio-86345802f5a5a15aa9bf1c1de706ae793adceca42022e6cc8d86be975077107e WatchSource:0}: Error finding container 86345802f5a5a15aa9bf1c1de706ae793adceca42022e6cc8d86be975077107e: Status 404 returned error can't find the container with id 86345802f5a5a15aa9bf1c1de706ae793adceca42022e6cc8d86be975077107e Jan 22 16:52:08 crc kubenswrapper[4758]: I0122 
16:52:08.521888 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647dd9b96f-tvdcp"] Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.034296 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lv7h6" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.118297 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.137073 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.199574 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ea53227e-7c78-42b4-959c-dd2531914be2","Type":"ContainerStarted","Data":"879e0aeb8d1bcac2eefb400de2ed81acbc3af9e70161b2e8d9775267f2afb046"} Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.220237 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwvtc\" (UniqueName: \"kubernetes.io/projected/1cc69af0-0ef0-4399-9084-e81419b65acd-kube-api-access-wwvtc\") pod \"1cc69af0-0ef0-4399-9084-e81419b65acd\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.220282 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-config-data\") pod \"1cc69af0-0ef0-4399-9084-e81419b65acd\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.220377 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cc69af0-0ef0-4399-9084-e81419b65acd-logs\") pod \"1cc69af0-0ef0-4399-9084-e81419b65acd\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.220456 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-combined-ca-bundle\") pod \"1cc69af0-0ef0-4399-9084-e81419b65acd\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.220505 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-scripts\") pod \"1cc69af0-0ef0-4399-9084-e81419b65acd\" (UID: \"1cc69af0-0ef0-4399-9084-e81419b65acd\") " Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.222151 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cc69af0-0ef0-4399-9084-e81419b65acd-logs" (OuterVolumeSpecName: "logs") pod "1cc69af0-0ef0-4399-9084-e81419b65acd" (UID: "1cc69af0-0ef0-4399-9084-e81419b65acd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.228015 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc69af0-0ef0-4399-9084-e81419b65acd-kube-api-access-wwvtc" (OuterVolumeSpecName: "kube-api-access-wwvtc") pod "1cc69af0-0ef0-4399-9084-e81419b65acd" (UID: "1cc69af0-0ef0-4399-9084-e81419b65acd"). InnerVolumeSpecName "kube-api-access-wwvtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.239895 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-scripts" (OuterVolumeSpecName: "scripts") pod "1cc69af0-0ef0-4399-9084-e81419b65acd" (UID: "1cc69af0-0ef0-4399-9084-e81419b65acd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.246799 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86d8479bd8-rrvgj"] Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.258894 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" event={"ID":"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4","Type":"ContainerStarted","Data":"86345802f5a5a15aa9bf1c1de706ae793adceca42022e6cc8d86be975077107e"} Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.261108 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cc69af0-0ef0-4399-9084-e81419b65acd" (UID: "1cc69af0-0ef0-4399-9084-e81419b65acd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.284952 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5486585c8c-crbmm" event={"ID":"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b","Type":"ContainerStarted","Data":"ba7c0e43f38c0cffae564dc6b5d786e1cedcfb22ed69cf7c25448b4a718e5ec1"} Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.287818 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lv7h6" event={"ID":"1cc69af0-0ef0-4399-9084-e81419b65acd","Type":"ContainerDied","Data":"8859445bae632771d33099612a2d6cd150dac680a2abe866e9636c3e306c179d"} Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.287947 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8859445bae632771d33099612a2d6cd150dac680a2abe866e9636c3e306c179d" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.288067 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lv7h6" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.293375 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.328202 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwvtc\" (UniqueName: \"kubernetes.io/projected/1cc69af0-0ef0-4399-9084-e81419b65acd-kube-api-access-wwvtc\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.328231 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cc69af0-0ef0-4399-9084-e81419b65acd-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.328240 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.328248 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.355655 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-config-data" (OuterVolumeSpecName: "config-data") pod "1cc69af0-0ef0-4399-9084-e81419b65acd" (UID: "1cc69af0-0ef0-4399-9084-e81419b65acd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.418867 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.430111 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc69af0-0ef0-4399-9084-e81419b65acd-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.512641 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 16:52:09 crc kubenswrapper[4758]: I0122 16:52:09.770131 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f8f6c6576-zfqs4"] Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.233783 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6cd69747bd-jv5rb"] Jan 22 16:52:10 crc kubenswrapper[4758]: E0122 16:52:10.234454 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc69af0-0ef0-4399-9084-e81419b65acd" containerName="placement-db-sync" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.234469 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc69af0-0ef0-4399-9084-e81419b65acd" containerName="placement-db-sync" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.234632 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc69af0-0ef0-4399-9084-e81419b65acd" containerName="placement-db-sync" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.236576 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.246226 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.246483 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-n4qvk" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.246914 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.247025 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.247117 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.287824 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cd69747bd-jv5rb"] Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.310131 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8f6c6576-zfqs4" event={"ID":"b7312e42-6737-4296-a35b-39bbb4a6f21b","Type":"ContainerStarted","Data":"0c91657a572b3b34b8817f7c25202435a5ff9b50a99f94fed486d107c72a8bd0"} Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.310174 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8f6c6576-zfqs4" event={"ID":"b7312e42-6737-4296-a35b-39bbb4a6f21b","Type":"ContainerStarted","Data":"cd241df4d9a9ca5fb55df0f9463dfe3812ee19ccbc679251cacb91b57217b4ea"} Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.311793 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d8479bd8-rrvgj" event={"ID":"b37953c7-685d-4a7e-85fd-a2964e025825","Type":"ContainerStarted","Data":"1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019"} Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.311818 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d8479bd8-rrvgj" event={"ID":"b37953c7-685d-4a7e-85fd-a2964e025825","Type":"ContainerStarted","Data":"21830ff0562d03fac8b6c3dcf351712b2fa2309112b08dc1b3eb9338d5071507"} Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.318893 4758 generic.go:334] "Generic (PLEG): container finished" podID="13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" containerID="b35efac72031395c7a23d19e43fdf246a2ace507230c2adefcc1424a200fa16a" exitCode=0 Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.318951 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" event={"ID":"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4","Type":"ContainerDied","Data":"b35efac72031395c7a23d19e43fdf246a2ace507230c2adefcc1424a200fa16a"} Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.327220 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5486585c8c-crbmm" event={"ID":"e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b","Type":"ContainerStarted","Data":"f6d2b2ffc1a32e0d119695131571c4bf04def41bb6e81b2a26186043f12e653b"} Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.380262 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-logs\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " 
pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.380324 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-internal-tls-certs\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.380352 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7vz\" (UniqueName: \"kubernetes.io/projected/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-kube-api-access-tm7vz\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.380440 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-scripts\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.380470 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-public-tls-certs\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.380491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-config-data\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.380513 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-combined-ca-bundle\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.447054 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5486585c8c-crbmm" podStartSLOduration=4.447032512 podStartE2EDuration="4.447032512s" podCreationTimestamp="2026-01-22 16:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:10.389112672 +0000 UTC m=+1351.872451957" watchObservedRunningTime="2026-01-22 16:52:10.447032512 +0000 UTC m=+1351.930371797" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.482510 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-scripts\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.482593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-public-tls-certs\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.482620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-config-data\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.482659 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-combined-ca-bundle\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.482872 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-logs\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.482977 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-internal-tls-certs\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.483047 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7vz\" (UniqueName: \"kubernetes.io/projected/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-kube-api-access-tm7vz\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.486642 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-logs\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.490243 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-internal-tls-certs\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.492069 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-public-tls-certs\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.498835 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-scripts\") pod \"placement-6cd69747bd-jv5rb\" (UID: 
\"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.506346 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-config-data\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.516453 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-combined-ca-bundle\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.543064 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7vz\" (UniqueName: \"kubernetes.io/projected/e48d0711-47a0-4fe2-8341-7c4fc97e58b0-kube-api-access-tm7vz\") pod \"placement-6cd69747bd-jv5rb\" (UID: \"e48d0711-47a0-4fe2-8341-7c4fc97e58b0\") " pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.563101 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86d8479bd8-rrvgj"] Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.582563 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-877b57c45-cs9rd"] Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.585137 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.590363 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.595108 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.595257 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.607170 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-877b57c45-cs9rd"] Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.690108 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-ovndb-tls-certs\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.690159 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-config\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.690205 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-internal-tls-certs\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.690228 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-combined-ca-bundle\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.690294 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-public-tls-certs\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.690343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6jcw\" (UniqueName: \"kubernetes.io/projected/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-kube-api-access-h6jcw\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.690365 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-httpd-config\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.791704 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-ovndb-tls-certs\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.791786 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-config\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.791848 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-internal-tls-certs\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.791878 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-combined-ca-bundle\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.793908 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-public-tls-certs\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.794007 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6jcw\" (UniqueName: \"kubernetes.io/projected/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-kube-api-access-h6jcw\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.794035 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-httpd-config\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.806390 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-config\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.806432 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-combined-ca-bundle\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.806456 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-httpd-config\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " 
pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.806428 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-public-tls-certs\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.813371 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-ovndb-tls-certs\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.814334 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6jcw\" (UniqueName: \"kubernetes.io/projected/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-kube-api-access-h6jcw\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.826663 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d80f9d2-e7aa-4cc3-876f-0ecd9915704d-internal-tls-certs\") pod \"neutron-877b57c45-cs9rd\" (UID: \"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d\") " pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:10 crc kubenswrapper[4758]: I0122 16:52:10.907640 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.397488 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d8479bd8-rrvgj" event={"ID":"b37953c7-685d-4a7e-85fd-a2964e025825","Type":"ContainerStarted","Data":"c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9"} Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.402630 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.430085 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-529mh" event={"ID":"b1666997-8287-4065-bcaf-409713fc6782","Type":"ContainerStarted","Data":"69ee03246e17adce8ca09b0c408259f38eddae14f39bc9e644a8110b0a4bfc78"} Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.441101 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86d8479bd8-rrvgj" podStartSLOduration=4.441078662 podStartE2EDuration="4.441078662s" podCreationTimestamp="2026-01-22 16:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:11.434252495 +0000 UTC m=+1352.917591780" watchObservedRunningTime="2026-01-22 16:52:11.441078662 +0000 UTC m=+1352.924417957" Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.469134 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8f6c6576-zfqs4" event={"ID":"b7312e42-6737-4296-a35b-39bbb4a6f21b","Type":"ContainerStarted","Data":"c0ef1600c909cea06f743be6661231c80d0f2cf31472785a373ddde21f6e6f4b"} Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.469172 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.469279 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" containerName="watcher-applier" containerID="cri-o://f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774" gracePeriod=30 Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.469344 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.527379 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-529mh" podStartSLOduration=5.601323347 podStartE2EDuration="1m4.527354996s" podCreationTimestamp="2026-01-22 16:51:07 +0000 UTC" firstStartedPulling="2026-01-22 16:51:10.219758342 +0000 UTC m=+1291.703097627" lastFinishedPulling="2026-01-22 16:52:09.145789991 +0000 UTC m=+1350.629129276" observedRunningTime="2026-01-22 16:52:11.467951285 +0000 UTC m=+1352.951290570" watchObservedRunningTime="2026-01-22 16:52:11.527354996 +0000 UTC m=+1353.010694281" Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.554623 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f8f6c6576-zfqs4" podStartSLOduration=4.554598629 podStartE2EDuration="4.554598629s" podCreationTimestamp="2026-01-22 16:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:11.532032153 +0000 UTC m=+1353.015371438" watchObservedRunningTime="2026-01-22 16:52:11.554598629 +0000 UTC m=+1353.037937914" Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.699848 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cd69747bd-jv5rb"] Jan 22 16:52:11 crc kubenswrapper[4758]: W0122 16:52:11.962032 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d80f9d2_e7aa_4cc3_876f_0ecd9915704d.slice/crio-b8fa5cfffe83bae6e71616d05f62e3479e0de50fdac329d6a6f0e28e28bb6ce8 WatchSource:0}: Error finding container b8fa5cfffe83bae6e71616d05f62e3479e0de50fdac329d6a6f0e28e28bb6ce8: Status 404 returned error can't find the container with id b8fa5cfffe83bae6e71616d05f62e3479e0de50fdac329d6a6f0e28e28bb6ce8 Jan 22 16:52:11 crc kubenswrapper[4758]: I0122 16:52:11.967604 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-877b57c45-cs9rd"] Jan 22 16:52:12 crc kubenswrapper[4758]: I0122 16:52:12.504705 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cd69747bd-jv5rb" event={"ID":"e48d0711-47a0-4fe2-8341-7c4fc97e58b0","Type":"ContainerStarted","Data":"56d28b023eb5f2023b3ea61aa2b6cb99bdef0654b43b1e982e673ec620857fbd"} Jan 22 16:52:12 crc kubenswrapper[4758]: I0122 16:52:12.505139 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cd69747bd-jv5rb" event={"ID":"e48d0711-47a0-4fe2-8341-7c4fc97e58b0","Type":"ContainerStarted","Data":"58e076bdb86e001ed79722dcaed99cac2ed132ba86f3921361cce052ffe816cd"} Jan 22 16:52:12 crc kubenswrapper[4758]: I0122 16:52:12.512949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-877b57c45-cs9rd" event={"ID":"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d","Type":"ContainerStarted","Data":"aaf078e9bce814668de8aed8af4d3156f902accf25ac3030a3b620e43fdc85c3"} Jan 
22 16:52:12 crc kubenswrapper[4758]: I0122 16:52:12.513002 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-877b57c45-cs9rd" event={"ID":"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d","Type":"ContainerStarted","Data":"b8fa5cfffe83bae6e71616d05f62e3479e0de50fdac329d6a6f0e28e28bb6ce8"} Jan 22 16:52:12 crc kubenswrapper[4758]: I0122 16:52:12.520649 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" event={"ID":"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4","Type":"ContainerStarted","Data":"07d58592b5fe3309684fc29c740b9416c6aab32053853beeb26cdde70d5380e2"} Jan 22 16:52:12 crc kubenswrapper[4758]: I0122 16:52:12.521672 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:12 crc kubenswrapper[4758]: I0122 16:52:12.522272 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86d8479bd8-rrvgj" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" containerName="neutron-api" containerID="cri-o://1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019" gracePeriod=30 Jan 22 16:52:12 crc kubenswrapper[4758]: I0122 16:52:12.522329 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86d8479bd8-rrvgj" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" containerName="neutron-httpd" containerID="cri-o://c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9" gracePeriod=30 Jan 22 16:52:13 crc kubenswrapper[4758]: E0122 16:52:13.268168 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 16:52:13 crc kubenswrapper[4758]: E0122 16:52:13.277684 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 16:52:13 crc kubenswrapper[4758]: E0122 16:52:13.285286 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 16:52:13 crc kubenswrapper[4758]: E0122 16:52:13.285360 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" containerName="watcher-applier" Jan 22 16:52:13 crc kubenswrapper[4758]: I0122 16:52:13.533093 4758 generic.go:334] "Generic (PLEG): container finished" podID="b37953c7-685d-4a7e-85fd-a2964e025825" containerID="c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9" exitCode=0 Jan 22 16:52:13 crc kubenswrapper[4758]: I0122 16:52:13.533182 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d8479bd8-rrvgj" 
event={"ID":"b37953c7-685d-4a7e-85fd-a2964e025825","Type":"ContainerDied","Data":"c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9"} Jan 22 16:52:13 crc kubenswrapper[4758]: I0122 16:52:13.536384 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-877b57c45-cs9rd" event={"ID":"1d80f9d2-e7aa-4cc3-876f-0ecd9915704d","Type":"ContainerStarted","Data":"3354e1829d2a65e75fd28f64a8970c86004051bc7894e9c2f303ca846cf92db7"} Jan 22 16:52:13 crc kubenswrapper[4758]: I0122 16:52:13.536463 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:13 crc kubenswrapper[4758]: I0122 16:52:13.539556 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cd69747bd-jv5rb" event={"ID":"e48d0711-47a0-4fe2-8341-7c4fc97e58b0","Type":"ContainerStarted","Data":"13cb0da205ba2d2e9645d6b5f8d16b02ecba05a59bc4acefdad8e9499ae3eb37"} Jan 22 16:52:13 crc kubenswrapper[4758]: I0122 16:52:13.570054 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-877b57c45-cs9rd" podStartSLOduration=3.570038025 podStartE2EDuration="3.570038025s" podCreationTimestamp="2026-01-22 16:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:13.566767845 +0000 UTC m=+1355.050107130" watchObservedRunningTime="2026-01-22 16:52:13.570038025 +0000 UTC m=+1355.053377310" Jan 22 16:52:13 crc kubenswrapper[4758]: I0122 16:52:13.570959 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" podStartSLOduration=7.57095342 podStartE2EDuration="7.57095342s" podCreationTimestamp="2026-01-22 16:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:12.550986793 +0000 UTC m=+1354.034326078" watchObservedRunningTime="2026-01-22 16:52:13.57095342 +0000 UTC m=+1355.054292705" Jan 22 16:52:13 crc kubenswrapper[4758]: I0122 16:52:13.599112 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6cd69747bd-jv5rb" podStartSLOduration=3.599081797 podStartE2EDuration="3.599081797s" podCreationTimestamp="2026-01-22 16:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:13.591585223 +0000 UTC m=+1355.074924528" watchObservedRunningTime="2026-01-22 16:52:13.599081797 +0000 UTC m=+1355.082421092" Jan 22 16:52:14 crc kubenswrapper[4758]: I0122 16:52:14.456463 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:52:14 crc kubenswrapper[4758]: I0122 16:52:14.456929 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerName="watcher-api-log" containerID="cri-o://8227401b4532badd1f72d6e02c9e94c68bd6e847c6f48c5118ae6e7d97aa2a27" gracePeriod=30 Jan 22 16:52:14 crc kubenswrapper[4758]: I0122 16:52:14.457013 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerName="watcher-api" containerID="cri-o://9eeddd727171947e1836501aaef4a7fc5e321f11099ad1273369dfad7e7322f2" gracePeriod=30 Jan 22 16:52:14 crc kubenswrapper[4758]: I0122 16:52:14.672725 4758 
generic.go:334] "Generic (PLEG): container finished" podID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerID="8227401b4532badd1f72d6e02c9e94c68bd6e847c6f48c5118ae6e7d97aa2a27" exitCode=143 Jan 22 16:52:14 crc kubenswrapper[4758]: I0122 16:52:14.673844 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7e024ebe-16b8-454b-b1ca-2e42e6883e65","Type":"ContainerDied","Data":"8227401b4532badd1f72d6e02c9e94c68bd6e847c6f48c5118ae6e7d97aa2a27"} Jan 22 16:52:14 crc kubenswrapper[4758]: I0122 16:52:14.674396 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:14 crc kubenswrapper[4758]: I0122 16:52:14.674880 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:16 crc kubenswrapper[4758]: I0122 16:52:16.883005 4758 generic.go:334] "Generic (PLEG): container finished" podID="ea53227e-7c78-42b4-959c-dd2531914be2" containerID="879e0aeb8d1bcac2eefb400de2ed81acbc3af9e70161b2e8d9775267f2afb046" exitCode=1 Jan 22 16:52:16 crc kubenswrapper[4758]: I0122 16:52:16.883416 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ea53227e-7c78-42b4-959c-dd2531914be2","Type":"ContainerDied","Data":"879e0aeb8d1bcac2eefb400de2ed81acbc3af9e70161b2e8d9775267f2afb046"} Jan 22 16:52:16 crc kubenswrapper[4758]: I0122 16:52:16.883453 4758 scope.go:117] "RemoveContainer" containerID="2281de046c6ce3884f86c4c8d3079b3033bb8b0a156ee418a39def692010ca33" Jan 22 16:52:16 crc kubenswrapper[4758]: I0122 16:52:16.884183 4758 scope.go:117] "RemoveContainer" containerID="879e0aeb8d1bcac2eefb400de2ed81acbc3af9e70161b2e8d9775267f2afb046" Jan 22 16:52:16 crc kubenswrapper[4758]: E0122 16:52:16.884483 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ea53227e-7c78-42b4-959c-dd2531914be2)\"" pod="openstack/watcher-decision-engine-0" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" Jan 22 16:52:16 crc kubenswrapper[4758]: I0122 16:52:16.900197 4758 generic.go:334] "Generic (PLEG): container finished" podID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerID="9eeddd727171947e1836501aaef4a7fc5e321f11099ad1273369dfad7e7322f2" exitCode=0 Jan 22 16:52:16 crc kubenswrapper[4758]: I0122 16:52:16.901184 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7e024ebe-16b8-454b-b1ca-2e42e6883e65","Type":"ContainerDied","Data":"9eeddd727171947e1836501aaef4a7fc5e321f11099ad1273369dfad7e7322f2"} Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.198026 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.361340 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-custom-prometheus-ca\") pod \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.361531 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e024ebe-16b8-454b-b1ca-2e42e6883e65-logs\") pod \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.361617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-config-data\") pod \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.361716 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwwc5\" (UniqueName: \"kubernetes.io/projected/7e024ebe-16b8-454b-b1ca-2e42e6883e65-kube-api-access-rwwc5\") pod \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.361784 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-combined-ca-bundle\") pod \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\" (UID: \"7e024ebe-16b8-454b-b1ca-2e42e6883e65\") " Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.362111 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e024ebe-16b8-454b-b1ca-2e42e6883e65-logs" (OuterVolumeSpecName: "logs") pod "7e024ebe-16b8-454b-b1ca-2e42e6883e65" (UID: "7e024ebe-16b8-454b-b1ca-2e42e6883e65"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.362268 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e024ebe-16b8-454b-b1ca-2e42e6883e65-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.407929 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e024ebe-16b8-454b-b1ca-2e42e6883e65-kube-api-access-rwwc5" (OuterVolumeSpecName: "kube-api-access-rwwc5") pod "7e024ebe-16b8-454b-b1ca-2e42e6883e65" (UID: "7e024ebe-16b8-454b-b1ca-2e42e6883e65"). InnerVolumeSpecName "kube-api-access-rwwc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.464831 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwwc5\" (UniqueName: \"kubernetes.io/projected/7e024ebe-16b8-454b-b1ca-2e42e6883e65-kube-api-access-rwwc5\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.508877 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e024ebe-16b8-454b-b1ca-2e42e6883e65" (UID: "7e024ebe-16b8-454b-b1ca-2e42e6883e65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.513374 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7e024ebe-16b8-454b-b1ca-2e42e6883e65" (UID: "7e024ebe-16b8-454b-b1ca-2e42e6883e65"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.517992 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.566337 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.566370 4758 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.575094 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-config-data" (OuterVolumeSpecName: "config-data") pod "7e024ebe-16b8-454b-b1ca-2e42e6883e65" (UID: "7e024ebe-16b8-454b-b1ca-2e42e6883e65"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.614713 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-645cd9555c-62zx7"] Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.618499 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" podUID="5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" containerName="dnsmasq-dns" containerID="cri-o://a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca" gracePeriod=10 Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.668482 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e024ebe-16b8-454b-b1ca-2e42e6883e65-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.933986 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7e024ebe-16b8-454b-b1ca-2e42e6883e65","Type":"ContainerDied","Data":"7d34d4fb6b65ecefc0772db37f4946a2a1b7986e257a44062b8da614f947ca2b"} Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.934037 4758 scope.go:117] "RemoveContainer" containerID="9eeddd727171947e1836501aaef4a7fc5e321f11099ad1273369dfad7e7322f2" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.934139 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.952762 4758 generic.go:334] "Generic (PLEG): container finished" podID="9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" containerID="f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774" exitCode=0 Jan 22 16:52:17 crc kubenswrapper[4758]: I0122 16:52:17.952856 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0","Type":"ContainerDied","Data":"f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774"} Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.007355 4758 scope.go:117] "RemoveContainer" containerID="8227401b4532badd1f72d6e02c9e94c68bd6e847c6f48c5118ae6e7d97aa2a27" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.011576 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.033639 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.072472 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:52:18 crc kubenswrapper[4758]: E0122 16:52:18.073120 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerName="watcher-api-log" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.073139 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerName="watcher-api-log" Jan 22 16:52:18 crc kubenswrapper[4758]: E0122 16:52:18.073179 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerName="watcher-api" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.073188 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerName="watcher-api" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.073430 4758 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerName="watcher-api" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.073454 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" containerName="watcher-api-log" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.074840 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.085960 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.086012 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.086185 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.108269 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.118257 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47q6b\" (UniqueName: \"kubernetes.io/projected/817e9de5-ef65-4caf-b47e-1cd6dd125daf-kube-api-access-47q6b\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.118317 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.118344 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-public-tls-certs\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.118382 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/817e9de5-ef65-4caf-b47e-1cd6dd125daf-logs\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.118403 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.118419 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.118440 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-config-data\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.323085 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.323307 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.323867 4758 scope.go:117] "RemoveContainer" containerID="879e0aeb8d1bcac2eefb400de2ed81acbc3af9e70161b2e8d9775267f2afb046" Jan 22 16:52:18 crc kubenswrapper[4758]: E0122 16:52:18.324122 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ea53227e-7c78-42b4-959c-dd2531914be2)\"" pod="openstack/watcher-decision-engine-0" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.324439 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.326574 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-public-tls-certs\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.330061 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/817e9de5-ef65-4caf-b47e-1cd6dd125daf-logs\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.330239 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.330340 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.330453 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-config-data\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.330767 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-47q6b\" (UniqueName: \"kubernetes.io/projected/817e9de5-ef65-4caf-b47e-1cd6dd125daf-kube-api-access-47q6b\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.332318 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/817e9de5-ef65-4caf-b47e-1cd6dd125daf-logs\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: E0122 16:52:18.334371 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774 is running failed: container process not found" containerID="f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.334419 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-public-tls-certs\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: E0122 16:52:18.334968 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774 is running failed: container process not found" containerID="f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 16:52:18 crc kubenswrapper[4758]: E0122 16:52:18.335503 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774 is running failed: container process not found" containerID="f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 16:52:18 crc kubenswrapper[4758]: E0122 16:52:18.335528 4758 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" containerName="watcher-applier" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.340008 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.342927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.354447 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.371574 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/817e9de5-ef65-4caf-b47e-1cd6dd125daf-config-data\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.386534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47q6b\" (UniqueName: \"kubernetes.io/projected/817e9de5-ef65-4caf-b47e-1cd6dd125daf-kube-api-access-47q6b\") pod \"watcher-api-0\" (UID: \"817e9de5-ef65-4caf-b47e-1cd6dd125daf\") " pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.433784 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.645520 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.647668 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.895256 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e024ebe-16b8-454b-b1ca-2e42e6883e65" path="/var/lib/kubelet/pods/7e024ebe-16b8-454b-b1ca-2e42e6883e65/volumes" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.976327 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-logs\") pod \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.976420 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-svc\") pod \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.976463 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcj2v\" (UniqueName: \"kubernetes.io/projected/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-kube-api-access-fcj2v\") pod \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.976554 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-nb\") pod \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.976590 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-swift-storage-0\") pod \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " Jan 22 16:52:18 crc 
kubenswrapper[4758]: I0122 16:52:18.976634 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-combined-ca-bundle\") pod \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.976793 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-sb\") pod \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.976861 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-config-data\") pod \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.976918 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-config\") pod \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\" (UID: \"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6\") " Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.976948 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b7xm\" (UniqueName: \"kubernetes.io/projected/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-kube-api-access-9b7xm\") pod \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\" (UID: \"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0\") " Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.989724 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-logs" (OuterVolumeSpecName: "logs") pod "9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" (UID: "9e324fa8-b3ee-4072-8a8d-5c08e771d0c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.994556 4758 generic.go:334] "Generic (PLEG): container finished" podID="5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" containerID="a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca" exitCode=0 Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.994642 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" event={"ID":"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6","Type":"ContainerDied","Data":"a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca"} Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.994673 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" event={"ID":"5e14bf40-a1bf-421a-acb7-f8e45f36dbf6","Type":"ContainerDied","Data":"036f040a3a0bd5395031fa867b9314d30fbb931e79054ae07f95937b4b56bf3d"} Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.994689 4758 scope.go:117] "RemoveContainer" containerID="a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.994882 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-645cd9555c-62zx7" Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.998850 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9e324fa8-b3ee-4072-8a8d-5c08e771d0c0","Type":"ContainerDied","Data":"4d87ac8190614b24ac71a438a4e7643ba4fa34b3ba33d9f3c1be4c1c5737674a"} Jan 22 16:52:18 crc kubenswrapper[4758]: I0122 16:52:18.999709 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.002274 4758 generic.go:334] "Generic (PLEG): container finished" podID="7a5061fa-23f9-42ce-9682-a3fd99d419d7" containerID="d3b437dad77713b4711bcd032a920a37b7499f02df7fdea656e732ca1a489d0f" exitCode=0 Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.002484 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dmssm" event={"ID":"7a5061fa-23f9-42ce-9682-a3fd99d419d7","Type":"ContainerDied","Data":"d3b437dad77713b4711bcd032a920a37b7499f02df7fdea656e732ca1a489d0f"} Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.053909 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-kube-api-access-9b7xm" (OuterVolumeSpecName: "kube-api-access-9b7xm") pod "9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" (UID: "9e324fa8-b3ee-4072-8a8d-5c08e771d0c0"). InnerVolumeSpecName "kube-api-access-9b7xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.054027 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-kube-api-access-fcj2v" (OuterVolumeSpecName: "kube-api-access-fcj2v") pod "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" (UID: "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6"). InnerVolumeSpecName "kube-api-access-fcj2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.058612 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" (UID: "9e324fa8-b3ee-4072-8a8d-5c08e771d0c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.079519 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.079542 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b7xm\" (UniqueName: \"kubernetes.io/projected/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-kube-api-access-9b7xm\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.079552 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.079561 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcj2v\" (UniqueName: \"kubernetes.io/projected/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-kube-api-access-fcj2v\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.109483 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-config-data" (OuterVolumeSpecName: "config-data") pod "9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" (UID: "9e324fa8-b3ee-4072-8a8d-5c08e771d0c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.143037 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" (UID: "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.151053 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-config" (OuterVolumeSpecName: "config") pod "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" (UID: "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.181823 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.181862 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.181875 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.193196 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" (UID: "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.196713 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" (UID: "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.197980 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" (UID: "5e14bf40-a1bf-421a-acb7-f8e45f36dbf6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.203220 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.274933 4758 scope.go:117] "RemoveContainer" containerID="15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.284031 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.284060 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.284073 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.340075 4758 scope.go:117] "RemoveContainer" containerID="a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca" Jan 22 16:52:19 crc kubenswrapper[4758]: E0122 16:52:19.341205 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca\": container with ID starting with a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca not found: ID does not exist" containerID="a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.341238 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca"} err="failed to get container status \"a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca\": rpc error: code = NotFound desc = could not find container \"a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca\": container with ID starting with a3092421fabe49a1da5c7724ddb7e03bc903a811ef647c5ad13cf2afa32719ca not found: ID does not exist" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.341273 4758 scope.go:117] "RemoveContainer" containerID="15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57" Jan 22 16:52:19 crc kubenswrapper[4758]: E0122 16:52:19.341577 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57\": container with ID starting with 15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57 not found: ID does not exist" containerID="15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.341596 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57"} err="failed to get container status \"15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57\": rpc error: code = NotFound desc = could not find container \"15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57\": container with ID starting with 
15294c85aebd194a415199d89a3b65124e35991a9ad9c444e27da8310ca93f57 not found: ID does not exist" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.341610 4758 scope.go:117] "RemoveContainer" containerID="f48b067c805b943bf203f726a9c5fe8e5de020511ec8991a6c86f7f1752ac774" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.436009 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.463243 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.473939 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-645cd9555c-62zx7"] Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.487949 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-645cd9555c-62zx7"] Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.496727 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 22 16:52:19 crc kubenswrapper[4758]: E0122 16:52:19.497368 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" containerName="dnsmasq-dns" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.497390 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" containerName="dnsmasq-dns" Jan 22 16:52:19 crc kubenswrapper[4758]: E0122 16:52:19.497413 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" containerName="init" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.497428 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" containerName="init" Jan 22 16:52:19 crc kubenswrapper[4758]: E0122 16:52:19.497456 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" containerName="watcher-applier" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.497468 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" containerName="watcher-applier" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.497776 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" containerName="watcher-applier" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.497840 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" containerName="dnsmasq-dns" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.499117 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.502021 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.510898 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.590949 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400d3b29-16ae-4eeb-a00d-716c210a1947-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.591392 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/400d3b29-16ae-4eeb-a00d-716c210a1947-logs\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.591565 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400d3b29-16ae-4eeb-a00d-716c210a1947-config-data\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.591631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfl78\" (UniqueName: \"kubernetes.io/projected/400d3b29-16ae-4eeb-a00d-716c210a1947-kube-api-access-vfl78\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.704724 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfl78\" (UniqueName: \"kubernetes.io/projected/400d3b29-16ae-4eeb-a00d-716c210a1947-kube-api-access-vfl78\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.704855 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400d3b29-16ae-4eeb-a00d-716c210a1947-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.704885 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/400d3b29-16ae-4eeb-a00d-716c210a1947-logs\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.704986 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400d3b29-16ae-4eeb-a00d-716c210a1947-config-data\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.705677 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/400d3b29-16ae-4eeb-a00d-716c210a1947-logs\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.709648 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400d3b29-16ae-4eeb-a00d-716c210a1947-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.712936 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400d3b29-16ae-4eeb-a00d-716c210a1947-config-data\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.729554 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfl78\" (UniqueName: \"kubernetes.io/projected/400d3b29-16ae-4eeb-a00d-716c210a1947-kube-api-access-vfl78\") pod \"watcher-applier-0\" (UID: \"400d3b29-16ae-4eeb-a00d-716c210a1947\") " pod="openstack/watcher-applier-0" Jan 22 16:52:19 crc kubenswrapper[4758]: I0122 16:52:19.887025 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.158016 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"817e9de5-ef65-4caf-b47e-1cd6dd125daf","Type":"ContainerStarted","Data":"be924813d217cda9851aa46dc89b376ac248bb73dbd68a6c9b0cd9f1ee0f5b0c"} Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.159357 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"817e9de5-ef65-4caf-b47e-1cd6dd125daf","Type":"ContainerStarted","Data":"e08a30d85ed131788238c7ef58bb32d9e566b08cf01e647a69a757c2503e5276"} Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.159435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"817e9de5-ef65-4caf-b47e-1cd6dd125daf","Type":"ContainerStarted","Data":"59fa6ded6f3da700c223540bf44f807b5a532dbf9489543355d3449362150aa1"} Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.161769 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.161947 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="817e9de5-ef65-4caf-b47e-1cd6dd125daf" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.170:9322/\": dial tcp 10.217.0.170:9322: connect: connection refused" Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.819639 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e14bf40-a1bf-421a-acb7-f8e45f36dbf6" path="/var/lib/kubelet/pods/5e14bf40-a1bf-421a-acb7-f8e45f36dbf6/volumes" Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.820657 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e324fa8-b3ee-4072-8a8d-5c08e771d0c0" path="/var/lib/kubelet/pods/9e324fa8-b3ee-4072-8a8d-5c08e771d0c0/volumes" Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.838813 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dmssm" Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.860941 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.860920518 podStartE2EDuration="3.860920518s" podCreationTimestamp="2026-01-22 16:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:20.204054167 +0000 UTC m=+1361.687393452" watchObservedRunningTime="2026-01-22 16:52:20.860920518 +0000 UTC m=+1362.344259813" Jan 22 16:52:20 crc kubenswrapper[4758]: I0122 16:52:20.904779 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 16:52:20 crc kubenswrapper[4758]: W0122 16:52:20.916048 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod400d3b29_16ae_4eeb_a00d_716c210a1947.slice/crio-3ea873461276367bc4099a287508436a380f49d04a1d0d0cb7c3182c22411542 WatchSource:0}: Error finding container 3ea873461276367bc4099a287508436a380f49d04a1d0d0cb7c3182c22411542: Status 404 returned error can't find the container with id 3ea873461276367bc4099a287508436a380f49d04a1d0d0cb7c3182c22411542 Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.143291 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-combined-ca-bundle\") pod \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.143412 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnz85\" (UniqueName: \"kubernetes.io/projected/7a5061fa-23f9-42ce-9682-a3fd99d419d7-kube-api-access-lnz85\") pod \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.143776 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-db-sync-config-data\") pod \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\" (UID: \"7a5061fa-23f9-42ce-9682-a3fd99d419d7\") " Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.268107 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5061fa-23f9-42ce-9682-a3fd99d419d7-kube-api-access-lnz85" (OuterVolumeSpecName: "kube-api-access-lnz85") pod "7a5061fa-23f9-42ce-9682-a3fd99d419d7" (UID: "7a5061fa-23f9-42ce-9682-a3fd99d419d7"). InnerVolumeSpecName "kube-api-access-lnz85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.272215 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7a5061fa-23f9-42ce-9682-a3fd99d419d7" (UID: "7a5061fa-23f9-42ce-9682-a3fd99d419d7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.283469 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dmssm" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.283534 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dmssm" event={"ID":"7a5061fa-23f9-42ce-9682-a3fd99d419d7","Type":"ContainerDied","Data":"06e8257cc9a7b2575bb3496493220632652948430f3d663277adc891aabc2e93"} Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.285279 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e8257cc9a7b2575bb3496493220632652948430f3d663277adc891aabc2e93" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.289523 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"400d3b29-16ae-4eeb-a00d-716c210a1947","Type":"ContainerStarted","Data":"3ea873461276367bc4099a287508436a380f49d04a1d0d0cb7c3182c22411542"} Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.376645 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnz85\" (UniqueName: \"kubernetes.io/projected/7a5061fa-23f9-42ce-9682-a3fd99d419d7-kube-api-access-lnz85\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.376694 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.509897 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a5061fa-23f9-42ce-9682-a3fd99d419d7" (UID: "7a5061fa-23f9-42ce-9682-a3fd99d419d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.596588 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5061fa-23f9-42ce-9682-a3fd99d419d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.620280 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.690326 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-775569c6d5-2vjq7"] Jan 22 16:52:21 crc kubenswrapper[4758]: E0122 16:52:21.690839 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5061fa-23f9-42ce-9682-a3fd99d419d7" containerName="barbican-db-sync" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.690855 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5061fa-23f9-42ce-9682-a3fd99d419d7" containerName="barbican-db-sync" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.691033 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5061fa-23f9-42ce-9682-a3fd99d419d7" containerName="barbican-db-sync" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.692101 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.710911 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.711798 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.711967 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z4pqk" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.727612 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5fbd4457db-5gt55"] Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.733615 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.749929 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-775569c6d5-2vjq7"] Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.749767 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.766780 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5fbd4457db-5gt55"] Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.781117 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9c99f667-72ftn"] Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.783150 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.805581 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/925ad838-b20e-48b3-9ee7-08133afb7840-logs\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.805732 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmdz6\" (UniqueName: \"kubernetes.io/projected/925ad838-b20e-48b3-9ee7-08133afb7840-kube-api-access-xmdz6\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.805875 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/925ad838-b20e-48b3-9ee7-08133afb7840-combined-ca-bundle\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.806844 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/925ad838-b20e-48b3-9ee7-08133afb7840-config-data\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 
16:52:21.810464 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9c99f667-72ftn"] Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.815886 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/925ad838-b20e-48b3-9ee7-08133afb7840-config-data-custom\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.866341 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b7cfcc9b6-tclz9"] Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.871972 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.875306 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b7cfcc9b6-tclz9"] Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.876717 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917488 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9mdj\" (UniqueName: \"kubernetes.io/projected/cd5b4616-f0db-4639-a791-c8882e65f6ca-kube-api-access-r9mdj\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917537 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-combined-ca-bundle\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917565 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjg8n\" (UniqueName: \"kubernetes.io/projected/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-kube-api-access-jjg8n\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917582 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data-custom\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917615 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917634 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-svc\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917660 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-logs\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917706 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/925ad838-b20e-48b3-9ee7-08133afb7840-config-data-custom\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917760 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/925ad838-b20e-48b3-9ee7-08133afb7840-logs\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917785 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmdz6\" (UniqueName: \"kubernetes.io/projected/925ad838-b20e-48b3-9ee7-08133afb7840-kube-api-access-xmdz6\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917800 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/925ad838-b20e-48b3-9ee7-08133afb7840-combined-ca-bundle\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917835 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917854 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-config\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917882 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-swift-storage-0\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917914 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-config-data-custom\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917935 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd5b4616-f0db-4639-a791-c8882e65f6ca-logs\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917951 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-config-data\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917967 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/925ad838-b20e-48b3-9ee7-08133afb7840-config-data\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.917982 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.918000 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-combined-ca-bundle\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.918023 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dthf7\" (UniqueName: \"kubernetes.io/projected/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-kube-api-access-dthf7\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.919152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/925ad838-b20e-48b3-9ee7-08133afb7840-logs\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.925099 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/925ad838-b20e-48b3-9ee7-08133afb7840-config-data\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " 
pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.931418 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/925ad838-b20e-48b3-9ee7-08133afb7840-combined-ca-bundle\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.935896 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/925ad838-b20e-48b3-9ee7-08133afb7840-config-data-custom\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:21 crc kubenswrapper[4758]: I0122 16:52:21.940653 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmdz6\" (UniqueName: \"kubernetes.io/projected/925ad838-b20e-48b3-9ee7-08133afb7840-kube-api-access-xmdz6\") pod \"barbican-worker-775569c6d5-2vjq7\" (UID: \"925ad838-b20e-48b3-9ee7-08133afb7840\") " pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.020638 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.020693 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-config\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.020720 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-swift-storage-0\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.020794 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-config-data-custom\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.020826 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd5b4616-f0db-4639-a791-c8882e65f6ca-logs\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.020853 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-config-data\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " 
pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.020876 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.022298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-combined-ca-bundle\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.022778 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dthf7\" (UniqueName: \"kubernetes.io/projected/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-kube-api-access-dthf7\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.022924 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9mdj\" (UniqueName: \"kubernetes.io/projected/cd5b4616-f0db-4639-a791-c8882e65f6ca-kube-api-access-r9mdj\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.022974 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-combined-ca-bundle\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.023019 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjg8n\" (UniqueName: \"kubernetes.io/projected/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-kube-api-access-jjg8n\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.023043 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data-custom\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.023107 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.023145 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-svc\") pod 
\"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.023199 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-logs\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.023927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.023928 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-775569c6d5-2vjq7" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.024206 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-swift-storage-0\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.022918 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-config\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.026187 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-svc\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.027049 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-logs\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.029920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.031231 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd5b4616-f0db-4639-a791-c8882e65f6ca-logs\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.031666 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-config-data-custom\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.032409 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data-custom\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.048159 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-combined-ca-bundle\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.052137 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-config-data\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.052295 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.055719 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjg8n\" (UniqueName: \"kubernetes.io/projected/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-kube-api-access-jjg8n\") pod \"dnsmasq-dns-7f9c99f667-72ftn\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.056069 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9mdj\" (UniqueName: \"kubernetes.io/projected/cd5b4616-f0db-4639-a791-c8882e65f6ca-kube-api-access-r9mdj\") pod \"barbican-api-6b7cfcc9b6-tclz9\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.056340 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dthf7\" (UniqueName: \"kubernetes.io/projected/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-kube-api-access-dthf7\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.057390 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4115ae1-f42e-40b7-b82a-74d7e4abfa77-combined-ca-bundle\") pod \"barbican-keystone-listener-5fbd4457db-5gt55\" (UID: \"b4115ae1-f42e-40b7-b82a-74d7e4abfa77\") " pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.059523 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.226661 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.240201 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.323625 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"400d3b29-16ae-4eeb-a00d-716c210a1947","Type":"ContainerStarted","Data":"49ba649eff45e2686fbcc95e421df75facb25001b302a3c30fe99c98d1d803df"} Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.337016 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-55b94d9b56-4x8cx" podUID="44cc928c-2531-4055-9b8f-b36957f3485d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:52:22 crc kubenswrapper[4758]: I0122 16:52:22.348988 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.348962315 podStartE2EDuration="3.348962315s" podCreationTimestamp="2026-01-22 16:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:22.340411991 +0000 UTC m=+1363.823751266" watchObservedRunningTime="2026-01-22 16:52:22.348962315 +0000 UTC m=+1363.832301600" Jan 22 16:52:23 crc kubenswrapper[4758]: I0122 16:52:23.435397 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 16:52:23 crc kubenswrapper[4758]: I0122 16:52:23.436156 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:52:24 crc kubenswrapper[4758]: I0122 16:52:24.716063 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:52:24 crc kubenswrapper[4758]: I0122 16:52:24.890812 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 22 16:52:25 crc kubenswrapper[4758]: I0122 16:52:25.871615 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.072808 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c78f7b546-sv5rx"] Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.074611 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.077454 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.077813 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.106564 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c78f7b546-sv5rx"] Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.309248 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-combined-ca-bundle\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.309302 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-public-tls-certs\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.309482 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/177272b6-b55b-4e45-9336-d6227af172d0-logs\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.309563 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-internal-tls-certs\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.309596 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89lh\" (UniqueName: \"kubernetes.io/projected/177272b6-b55b-4e45-9336-d6227af172d0-kube-api-access-v89lh\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.309614 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-config-data-custom\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.309723 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-config-data\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.411277 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-config-data\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.411372 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-combined-ca-bundle\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.411427 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-public-tls-certs\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.411507 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/177272b6-b55b-4e45-9336-d6227af172d0-logs\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.411550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-internal-tls-certs\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.411583 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v89lh\" (UniqueName: \"kubernetes.io/projected/177272b6-b55b-4e45-9336-d6227af172d0-kube-api-access-v89lh\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.411610 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-config-data-custom\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.412399 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/177272b6-b55b-4e45-9336-d6227af172d0-logs\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.417756 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-config-data-custom\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.420339 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-public-tls-certs\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.423790 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-config-data\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.424281 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-combined-ca-bundle\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.442479 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v89lh\" (UniqueName: \"kubernetes.io/projected/177272b6-b55b-4e45-9336-d6227af172d0-kube-api-access-v89lh\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.450347 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/177272b6-b55b-4e45-9336-d6227af172d0-internal-tls-certs\") pod \"barbican-api-6c78f7b546-sv5rx\" (UID: \"177272b6-b55b-4e45-9336-d6227af172d0\") " pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:26 crc kubenswrapper[4758]: I0122 16:52:26.708559 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:28 crc kubenswrapper[4758]: I0122 16:52:28.586012 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 22 16:52:28 crc kubenswrapper[4758]: I0122 16:52:28.616485 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 22 16:52:29 crc kubenswrapper[4758]: I0122 16:52:29.637050 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerID="8bdd78becfb73f8d3bd1890964a73880ab03efb2e937200ea3fac388b7cf775e" exitCode=137 Jan 22 16:52:29 crc kubenswrapper[4758]: I0122 16:52:29.637296 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerID="563966c491bebd90caa468ebe97ba454c532453d5ab1012d0fd7d6cc5ed5ff66" exitCode=137 Jan 22 16:52:29 crc kubenswrapper[4758]: I0122 16:52:29.637105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5d5b589c-8c4hx" event={"ID":"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2","Type":"ContainerDied","Data":"8bdd78becfb73f8d3bd1890964a73880ab03efb2e937200ea3fac388b7cf775e"} Jan 22 16:52:29 crc kubenswrapper[4758]: I0122 16:52:29.637362 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5d5b589c-8c4hx" event={"ID":"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2","Type":"ContainerDied","Data":"563966c491bebd90caa468ebe97ba454c532453d5ab1012d0fd7d6cc5ed5ff66"} Jan 22 16:52:29 crc kubenswrapper[4758]: I0122 16:52:29.640695 4758 generic.go:334] "Generic (PLEG): container finished" podID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerID="60fe6d10518ecf7840c52f3d3028d2c8fc1ff34292e0a985a667d2e13644f112" exitCode=137 Jan 22 16:52:29 crc kubenswrapper[4758]: I0122 16:52:29.640726 4758 generic.go:334] "Generic (PLEG): container finished" podID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerID="03be0baef6e9d0e040038bc9408c12842e56c34ea8fb131382ccf9b85a67ae89" exitCode=137 Jan 22 16:52:29 crc kubenswrapper[4758]: I0122 16:52:29.640816 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-744dd76757-hj9wx" event={"ID":"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e","Type":"ContainerDied","Data":"60fe6d10518ecf7840c52f3d3028d2c8fc1ff34292e0a985a667d2e13644f112"} Jan 22 16:52:29 crc kubenswrapper[4758]: I0122 16:52:29.640870 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-744dd76757-hj9wx" event={"ID":"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e","Type":"ContainerDied","Data":"03be0baef6e9d0e040038bc9408c12842e56c34ea8fb131382ccf9b85a67ae89"} Jan 22 16:52:29 crc kubenswrapper[4758]: I0122 16:52:29.896151 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 22 16:52:30 crc kubenswrapper[4758]: I0122 16:52:30.015921 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 22 16:52:30 crc kubenswrapper[4758]: I0122 16:52:30.020042 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 22 16:52:30 crc kubenswrapper[4758]: I0122 16:52:30.214092 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:52:30 crc kubenswrapper[4758]: I0122 16:52:30.651621 4758 generic.go:334] "Generic (PLEG): container finished" podID="b1666997-8287-4065-bcaf-409713fc6782" 
containerID="69ee03246e17adce8ca09b0c408259f38eddae14f39bc9e644a8110b0a4bfc78" exitCode=0 Jan 22 16:52:30 crc kubenswrapper[4758]: I0122 16:52:30.651776 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-529mh" event={"ID":"b1666997-8287-4065-bcaf-409713fc6782","Type":"ContainerDied","Data":"69ee03246e17adce8ca09b0c408259f38eddae14f39bc9e644a8110b0a4bfc78"} Jan 22 16:52:30 crc kubenswrapper[4758]: I0122 16:52:30.736558 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 22 16:52:30 crc kubenswrapper[4758]: I0122 16:52:30.812034 4758 scope.go:117] "RemoveContainer" containerID="879e0aeb8d1bcac2eefb400de2ed81acbc3af9e70161b2e8d9775267f2afb046" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.141583 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.343238 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg4pf\" (UniqueName: \"kubernetes.io/projected/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-kube-api-access-sg4pf\") pod \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.343674 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-logs\") pod \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.343713 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-horizon-secret-key\") pod \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.343797 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-scripts\") pod \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.343869 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-config-data\") pod \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\" (UID: \"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e\") " Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.344664 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-logs" (OuterVolumeSpecName: "logs") pod "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" (UID: "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.350778 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" (UID: "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.352241 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-kube-api-access-sg4pf" (OuterVolumeSpecName: "kube-api-access-sg4pf") pod "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" (UID: "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e"). InnerVolumeSpecName "kube-api-access-sg4pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.370293 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-scripts" (OuterVolumeSpecName: "scripts") pod "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" (UID: "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.382787 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-config-data" (OuterVolumeSpecName: "config-data") pod "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" (UID: "6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.447404 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.447438 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.447450 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.447461 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.447473 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg4pf\" (UniqueName: \"kubernetes.io/projected/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e-kube-api-access-sg4pf\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.548965 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-55b94d9b56-4x8cx" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.615675 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c78f7b546-sv5rx"] Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.626215 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88b76f788-th2jq"] Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.626506 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-88b76f788-th2jq" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon-log" containerID="cri-o://f512c542a3f7080a3e0e9498fe8473553577ff1a142250d2654113eab457a261" gracePeriod=30 Jan 22 
16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.626659 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-88b76f788-th2jq" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon" containerID="cri-o://3f804875d0ec8e65f89084335817802426f37c82f619dc121c0a2be09bd1b67f" gracePeriod=30 Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.680994 4758 generic.go:334] "Generic (PLEG): container finished" podID="f8fe0f21-8912-4d6c-ba4f-6600456784e1" containerID="e372811a729ff0df8fbd6e21e7f66d2104eef17385e80d9766761eea836f31d5" exitCode=0 Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.681056 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9h9hb" event={"ID":"f8fe0f21-8912-4d6c-ba4f-6600456784e1","Type":"ContainerDied","Data":"e372811a729ff0df8fbd6e21e7f66d2104eef17385e80d9766761eea836f31d5"} Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.693205 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-744dd76757-hj9wx" event={"ID":"6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e","Type":"ContainerDied","Data":"332c1e7e71ca803ff31574fad1467642df76cd860af0a224a358848c1050417e"} Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.693270 4758 scope.go:117] "RemoveContainer" containerID="60fe6d10518ecf7840c52f3d3028d2c8fc1ff34292e0a985a667d2e13644f112" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.693272 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-744dd76757-hj9wx" Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.768936 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-744dd76757-hj9wx"] Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.793101 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-744dd76757-hj9wx"] Jan 22 16:52:32 crc kubenswrapper[4758]: I0122 16:52:32.821316 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" path="/var/lib/kubelet/pods/6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e/volumes" Jan 22 16:52:32 crc kubenswrapper[4758]: E0122 16:52:32.969074 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 22 16:52:32 crc kubenswrapper[4758]: E0122 16:52:32.969619 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chd47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a67f1efb-4c74-4acd-9948-de1491a8479c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:52:32 crc kubenswrapper[4758]: E0122 16:52:32.970983 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="a67f1efb-4c74-4acd-9948-de1491a8479c" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.122218 4758 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/cinder-db-sync-529mh" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.199051 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284393 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5x5k\" (UniqueName: \"kubernetes.io/projected/b1666997-8287-4065-bcaf-409713fc6782-kube-api-access-t5x5k\") pod \"b1666997-8287-4065-bcaf-409713fc6782\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284442 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-scripts\") pod \"b1666997-8287-4065-bcaf-409713fc6782\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284484 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-config-data\") pod \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284550 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-horizon-secret-key\") pod \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284571 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-combined-ca-bundle\") pod \"b1666997-8287-4065-bcaf-409713fc6782\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284608 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-db-sync-config-data\") pod \"b1666997-8287-4065-bcaf-409713fc6782\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284625 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-scripts\") pod \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284645 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-logs\") pod \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284671 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n4qw\" (UniqueName: \"kubernetes.io/projected/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-kube-api-access-9n4qw\") pod \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\" (UID: \"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284685 4758 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1666997-8287-4065-bcaf-409713fc6782-etc-machine-id\") pod \"b1666997-8287-4065-bcaf-409713fc6782\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.284725 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-config-data\") pod \"b1666997-8287-4065-bcaf-409713fc6782\" (UID: \"b1666997-8287-4065-bcaf-409713fc6782\") " Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.285007 4758 scope.go:117] "RemoveContainer" containerID="03be0baef6e9d0e040038bc9408c12842e56c34ea8fb131382ccf9b85a67ae89" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.291236 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-logs" (OuterVolumeSpecName: "logs") pod "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" (UID: "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.303552 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1666997-8287-4065-bcaf-409713fc6782-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b1666997-8287-4065-bcaf-409713fc6782" (UID: "b1666997-8287-4065-bcaf-409713fc6782"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.334067 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-kube-api-access-9n4qw" (OuterVolumeSpecName: "kube-api-access-9n4qw") pod "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" (UID: "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2"). InnerVolumeSpecName "kube-api-access-9n4qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.350154 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1666997-8287-4065-bcaf-409713fc6782-kube-api-access-t5x5k" (OuterVolumeSpecName: "kube-api-access-t5x5k") pod "b1666997-8287-4065-bcaf-409713fc6782" (UID: "b1666997-8287-4065-bcaf-409713fc6782"). InnerVolumeSpecName "kube-api-access-t5x5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.356009 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-scripts" (OuterVolumeSpecName: "scripts") pod "b1666997-8287-4065-bcaf-409713fc6782" (UID: "b1666997-8287-4065-bcaf-409713fc6782"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.385034 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" (UID: "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.406530 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b1666997-8287-4065-bcaf-409713fc6782" (UID: "b1666997-8287-4065-bcaf-409713fc6782"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.411239 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.411275 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.411286 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.411296 4758 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1666997-8287-4065-bcaf-409713fc6782-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.411313 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n4qw\" (UniqueName: \"kubernetes.io/projected/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-kube-api-access-9n4qw\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.411342 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5x5k\" (UniqueName: \"kubernetes.io/projected/b1666997-8287-4065-bcaf-409713fc6782-kube-api-access-t5x5k\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.411353 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.461151 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-scripts" (OuterVolumeSpecName: "scripts") pod "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" (UID: "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.470221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-config-data" (OuterVolumeSpecName: "config-data") pod "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" (UID: "e3eee08c-7cca-4bd3-bcd2-f3702e470ff2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.497922 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1666997-8287-4065-bcaf-409713fc6782" (UID: "b1666997-8287-4065-bcaf-409713fc6782"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.511019 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-config-data" (OuterVolumeSpecName: "config-data") pod "b1666997-8287-4065-bcaf-409713fc6782" (UID: "b1666997-8287-4065-bcaf-409713fc6782"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.512802 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.513107 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.513117 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1666997-8287-4065-bcaf-409713fc6782-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.513128 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4758]: W0122 16:52:33.661509 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4115ae1_f42e_40b7_b82a_74d7e4abfa77.slice/crio-4396ad59bbe33e8cce9687f8d26313dc66ed3e1d51fad585edc378e92b28f70d WatchSource:0}: Error finding container 4396ad59bbe33e8cce9687f8d26313dc66ed3e1d51fad585edc378e92b28f70d: Status 404 returned error can't find the container with id 4396ad59bbe33e8cce9687f8d26313dc66ed3e1d51fad585edc378e92b28f70d Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.662416 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5fbd4457db-5gt55"] Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.725569 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c78f7b546-sv5rx" event={"ID":"177272b6-b55b-4e45-9336-d6227af172d0","Type":"ContainerStarted","Data":"a1e6caad3ceb84bfec62ddd5d7a6fb9a150c0e17495ac1d0cff1086df5747bf1"} Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.725622 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c78f7b546-sv5rx" event={"ID":"177272b6-b55b-4e45-9336-d6227af172d0","Type":"ContainerStarted","Data":"3c5de37bffa3079ff32a97f5f7c37abc1b338ada95661ce46699a38a1148bfd8"} Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.729400 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9c99f667-72ftn"] Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 
16:52:33.730791 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5d5b589c-8c4hx" event={"ID":"e3eee08c-7cca-4bd3-bcd2-f3702e470ff2","Type":"ContainerDied","Data":"c1b9d0a6ef91826b41a41b65dcf6960a48909596a19e804626ac0264c304bbd5"} Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.730840 4758 scope.go:117] "RemoveContainer" containerID="8bdd78becfb73f8d3bd1890964a73880ab03efb2e937200ea3fac388b7cf775e" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.730842 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5d5b589c-8c4hx" Jan 22 16:52:33 crc kubenswrapper[4758]: W0122 16:52:33.731228 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc57e1b_3cb7_4bce_91e8_d31356bf83ac.slice/crio-759d46188af680ed12e3bf189c53a0f6798ce1612e51c45faf3de9dcb10f6043 WatchSource:0}: Error finding container 759d46188af680ed12e3bf189c53a0f6798ce1612e51c45faf3de9dcb10f6043: Status 404 returned error can't find the container with id 759d46188af680ed12e3bf189c53a0f6798ce1612e51c45faf3de9dcb10f6043 Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.745401 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" event={"ID":"b4115ae1-f42e-40b7-b82a-74d7e4abfa77","Type":"ContainerStarted","Data":"4396ad59bbe33e8cce9687f8d26313dc66ed3e1d51fad585edc378e92b28f70d"} Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.747089 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-529mh" event={"ID":"b1666997-8287-4065-bcaf-409713fc6782","Type":"ContainerDied","Data":"d2857fc65ea5b8a2f731139ab65aa0f1107efabab815078a44ab02715be19125"} Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.747117 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2857fc65ea5b8a2f731139ab65aa0f1107efabab815078a44ab02715be19125" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.747195 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-529mh" Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.761909 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a67f1efb-4c74-4acd-9948-de1491a8479c" containerName="sg-core" containerID="cri-o://80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05" gracePeriod=30 Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.761990 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ea53227e-7c78-42b4-959c-dd2531914be2","Type":"ContainerStarted","Data":"a242bb86d02a02912959476d1e89c5801e3e8b0a179d33e8ede7e504d5a32eae"} Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.788307 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5d5b589c-8c4hx"] Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.814057 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b5d5b589c-8c4hx"] Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.875601 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b7cfcc9b6-tclz9"] Jan 22 16:52:33 crc kubenswrapper[4758]: I0122 16:52:33.990946 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-775569c6d5-2vjq7"] Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.021148 4758 scope.go:117] "RemoveContainer" containerID="563966c491bebd90caa468ebe97ba454c532453d5ab1012d0fd7d6cc5ed5ff66" Jan 22 16:52:34 crc kubenswrapper[4758]: W0122 16:52:34.041729 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5b4616_f0db_4639_a791_c8882e65f6ca.slice/crio-cf8801fa574f38bac6462d4899dfd5f6f8aafde8709b294654b251dbb92a8825 WatchSource:0}: Error finding container cf8801fa574f38bac6462d4899dfd5f6f8aafde8709b294654b251dbb92a8825: Status 404 returned error can't find the container with id cf8801fa574f38bac6462d4899dfd5f6f8aafde8709b294654b251dbb92a8825 Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.315847 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9h9hb" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.344893 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s86q8\" (UniqueName: \"kubernetes.io/projected/f8fe0f21-8912-4d6c-ba4f-6600456784e1-kube-api-access-s86q8\") pod \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.345061 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-db-sync-config-data\") pod \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.345093 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-combined-ca-bundle\") pod \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.345140 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-config-data\") pod \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\" (UID: \"f8fe0f21-8912-4d6c-ba4f-6600456784e1\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.416435 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8fe0f21-8912-4d6c-ba4f-6600456784e1-kube-api-access-s86q8" (OuterVolumeSpecName: "kube-api-access-s86q8") pod "f8fe0f21-8912-4d6c-ba4f-6600456784e1" (UID: "f8fe0f21-8912-4d6c-ba4f-6600456784e1"). InnerVolumeSpecName "kube-api-access-s86q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.416532 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f8fe0f21-8912-4d6c-ba4f-6600456784e1" (UID: "f8fe0f21-8912-4d6c-ba4f-6600456784e1"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.454201 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s86q8\" (UniqueName: \"kubernetes.io/projected/f8fe0f21-8912-4d6c-ba4f-6600456784e1-kube-api-access-s86q8\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.454234 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.466019 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 16:52:34 crc kubenswrapper[4758]: E0122 16:52:34.466588 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8fe0f21-8912-4d6c-ba4f-6600456784e1" containerName="glance-db-sync" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.466607 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8fe0f21-8912-4d6c-ba4f-6600456784e1" containerName="glance-db-sync" Jan 22 16:52:34 crc kubenswrapper[4758]: E0122 16:52:34.466622 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerName="horizon" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.466631 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerName="horizon" Jan 22 16:52:34 crc kubenswrapper[4758]: E0122 16:52:34.466648 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerName="horizon-log" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.466656 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerName="horizon-log" Jan 22 16:52:34 crc kubenswrapper[4758]: E0122 16:52:34.466679 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerName="horizon-log" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.466698 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerName="horizon-log" Jan 22 16:52:34 crc kubenswrapper[4758]: E0122 16:52:34.466733 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1666997-8287-4065-bcaf-409713fc6782" containerName="cinder-db-sync" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.466759 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1666997-8287-4065-bcaf-409713fc6782" containerName="cinder-db-sync" Jan 22 16:52:34 crc kubenswrapper[4758]: E0122 16:52:34.466774 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerName="horizon" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.466783 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerName="horizon" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.466999 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1666997-8287-4065-bcaf-409713fc6782" containerName="cinder-db-sync" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.467020 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerName="horizon-log" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.467032 
4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerName="horizon-log" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.467042 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" containerName="horizon" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.467057 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8fe0f21-8912-4d6c-ba4f-6600456784e1" containerName="glance-db-sync" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.467075 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6000f2b6-07a7-4c7f-98d8-4b6a26b3b09e" containerName="horizon" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.468349 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.479394 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-85hcg" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.479647 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.479847 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.480004 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.480812 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.524219 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9c99f667-72ftn"] Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.556871 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.561577 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.561640 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-scripts\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.561690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjk5\" (UniqueName: \"kubernetes.io/projected/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-kube-api-access-vgjk5\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.561770 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.561842 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.568681 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8fe0f21-8912-4d6c-ba4f-6600456784e1" (UID: "f8fe0f21-8912-4d6c-ba4f-6600456784e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.581683 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8596c585-sw7r9"] Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.584363 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.619681 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.634859 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8596c585-sw7r9"] Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.670449 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-sg-core-conf-yaml\") pod \"a67f1efb-4c74-4acd-9948-de1491a8479c\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.670548 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-combined-ca-bundle\") pod \"a67f1efb-4c74-4acd-9948-de1491a8479c\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.670570 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-config-data\") pod \"a67f1efb-4c74-4acd-9948-de1491a8479c\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.670585 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-scripts\") pod \"a67f1efb-4c74-4acd-9948-de1491a8479c\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.670612 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-log-httpd\") pod \"a67f1efb-4c74-4acd-9948-de1491a8479c\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " 
Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.670638 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chd47\" (UniqueName: \"kubernetes.io/projected/a67f1efb-4c74-4acd-9948-de1491a8479c-kube-api-access-chd47\") pod \"a67f1efb-4c74-4acd-9948-de1491a8479c\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.670701 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-run-httpd\") pod \"a67f1efb-4c74-4acd-9948-de1491a8479c\" (UID: \"a67f1efb-4c74-4acd-9948-de1491a8479c\") " Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.670957 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-sb\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.670983 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-scripts\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671003 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgf64\" (UniqueName: \"kubernetes.io/projected/e140fc6a-db89-4748-be82-94765061de55-kube-api-access-bgf64\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671031 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjk5\" (UniqueName: \"kubernetes.io/projected/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-kube-api-access-vgjk5\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671064 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671098 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671349 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-swift-storage-0\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671432 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-nb\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-svc\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671531 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-config\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671573 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.671633 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.684857 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a67f1efb-4c74-4acd-9948-de1491a8479c" (UID: "a67f1efb-4c74-4acd-9948-de1491a8479c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.689214 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.689551 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a67f1efb-4c74-4acd-9948-de1491a8479c" (UID: "a67f1efb-4c74-4acd-9948-de1491a8479c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.709400 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 16:52:34 crc kubenswrapper[4758]: E0122 16:52:34.709811 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67f1efb-4c74-4acd-9948-de1491a8479c" containerName="sg-core" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.709827 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67f1efb-4c74-4acd-9948-de1491a8479c" containerName="sg-core" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.710012 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67f1efb-4c74-4acd-9948-de1491a8479c" containerName="sg-core" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.711114 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.714115 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.725981 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-config-data" (OuterVolumeSpecName: "config-data") pod "a67f1efb-4c74-4acd-9948-de1491a8479c" (UID: "a67f1efb-4c74-4acd-9948-de1491a8479c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.726882 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67f1efb-4c74-4acd-9948-de1491a8479c-kube-api-access-chd47" (OuterVolumeSpecName: "kube-api-access-chd47") pod "a67f1efb-4c74-4acd-9948-de1491a8479c" (UID: "a67f1efb-4c74-4acd-9948-de1491a8479c"). InnerVolumeSpecName "kube-api-access-chd47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.729067 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.735353 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.746821 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a67f1efb-4c74-4acd-9948-de1491a8479c" (UID: "a67f1efb-4c74-4acd-9948-de1491a8479c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.747223 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-scripts\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.748592 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.750855 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-scripts" (OuterVolumeSpecName: "scripts") pod "a67f1efb-4c74-4acd-9948-de1491a8479c" (UID: "a67f1efb-4c74-4acd-9948-de1491a8479c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.750919 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.751781 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-config-data" (OuterVolumeSpecName: "config-data") pod "f8fe0f21-8912-4d6c-ba4f-6600456784e1" (UID: "f8fe0f21-8912-4d6c-ba4f-6600456784e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.758307 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjk5\" (UniqueName: \"kubernetes.io/projected/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-kube-api-access-vgjk5\") pod \"cinder-scheduler-0\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.769605 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a67f1efb-4c74-4acd-9948-de1491a8479c" (UID: "a67f1efb-4c74-4acd-9948-de1491a8479c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.788099 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5401a8-4432-405a-8cdd-06d21ee90ece-logs\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.788174 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-nb\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.788212 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a5401a8-4432-405a-8cdd-06d21ee90ece-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.788254 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.789033 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-svc\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.789051 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-nb\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.789618 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-svc\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.794733 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-config\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.794807 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-scripts\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.794967 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-sb\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795527 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgf64\" (UniqueName: \"kubernetes.io/projected/e140fc6a-db89-4748-be82-94765061de55-kube-api-access-bgf64\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795579 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795659 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795766 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-swift-storage-0\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795801 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5m9h\" (UniqueName: \"kubernetes.io/projected/0a5401a8-4432-405a-8cdd-06d21ee90ece-kube-api-access-c5m9h\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795912 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fe0f21-8912-4d6c-ba4f-6600456784e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795925 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795935 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795943 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795951 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a67f1efb-4c74-4acd-9948-de1491a8479c-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795959 4758 
reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795979 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chd47\" (UniqueName: \"kubernetes.io/projected/a67f1efb-4c74-4acd-9948-de1491a8479c-kube-api-access-chd47\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.795989 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a67f1efb-4c74-4acd-9948-de1491a8479c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.796283 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-config\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.796542 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-sb\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.797045 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-swift-storage-0\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.812589 4758 generic.go:334] "Generic (PLEG): container finished" podID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerID="3f804875d0ec8e65f89084335817802426f37c82f619dc121c0a2be09bd1b67f" exitCode=0 Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.820065 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgf64\" (UniqueName: \"kubernetes.io/projected/e140fc6a-db89-4748-be82-94765061de55-kube-api-access-bgf64\") pod \"dnsmasq-dns-8596c585-sw7r9\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.844342 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3eee08c-7cca-4bd3-bcd2-f3702e470ff2" path="/var/lib/kubelet/pods/e3eee08c-7cca-4bd3-bcd2-f3702e470ff2/volumes" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.845250 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.845280 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.845294 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b76f788-th2jq" event={"ID":"40487aaa-4c45-41b2-ab14-76477ed2f4bb","Type":"ContainerDied","Data":"3f804875d0ec8e65f89084335817802426f37c82f619dc121c0a2be09bd1b67f"} Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.845318 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-775569c6d5-2vjq7" event={"ID":"925ad838-b20e-48b3-9ee7-08133afb7840","Type":"ContainerStarted","Data":"5a8f42700650b8976568fdacd831e3e0c3ac1e9afb3511d4f814eae842f12aef"} Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.845333 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" event={"ID":"cd5b4616-f0db-4639-a791-c8882e65f6ca","Type":"ContainerStarted","Data":"cf8801fa574f38bac6462d4899dfd5f6f8aafde8709b294654b251dbb92a8825"} Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.845348 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c78f7b546-sv5rx" event={"ID":"177272b6-b55b-4e45-9336-d6227af172d0","Type":"ContainerStarted","Data":"457dbe994d7f6b89e727795f6a033f4b849df07959743b683586b9fc8a37a069"} Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.846000 4758 generic.go:334] "Generic (PLEG): container finished" podID="a67f1efb-4c74-4acd-9948-de1491a8479c" containerID="80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05" exitCode=2 Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.846066 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a67f1efb-4c74-4acd-9948-de1491a8479c","Type":"ContainerDied","Data":"80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05"} Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.846090 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a67f1efb-4c74-4acd-9948-de1491a8479c","Type":"ContainerDied","Data":"d7a1cf246f1b5bd5c74c6e6f6c8fb54d02ec6635810328fd6150b356007a2a66"} Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.846107 4758 scope.go:117] "RemoveContainer" containerID="80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.846208 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.855874 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9h9hb" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.857176 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9h9hb" event={"ID":"f8fe0f21-8912-4d6c-ba4f-6600456784e1","Type":"ContainerDied","Data":"95c2dcfb21c4dfe180e2269eb4ff18ff5560a69c3d80dca474a1d910c79f3cdb"} Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.857217 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95c2dcfb21c4dfe180e2269eb4ff18ff5560a69c3d80dca474a1d910c79f3cdb" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.876848 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.879533 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c78f7b546-sv5rx" podStartSLOduration=8.879511784 podStartE2EDuration="8.879511784s" podCreationTimestamp="2026-01-22 16:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:34.862132109 +0000 UTC m=+1376.345471394" watchObservedRunningTime="2026-01-22 16:52:34.879511784 +0000 UTC m=+1376.362851069" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.879921 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" event={"ID":"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac","Type":"ContainerStarted","Data":"759d46188af680ed12e3bf189c53a0f6798ce1612e51c45faf3de9dcb10f6043"} Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.897447 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.898685 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.898866 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5m9h\" (UniqueName: \"kubernetes.io/projected/0a5401a8-4432-405a-8cdd-06d21ee90ece-kube-api-access-c5m9h\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.898954 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5401a8-4432-405a-8cdd-06d21ee90ece-logs\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.899049 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a5401a8-4432-405a-8cdd-06d21ee90ece-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.899090 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.899256 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-scripts\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.900547 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a5401a8-4432-405a-8cdd-06d21ee90ece-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.900966 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5401a8-4432-405a-8cdd-06d21ee90ece-logs\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.901810 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.908509 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-scripts\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.909020 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.909793 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.917187 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:34 crc kubenswrapper[4758]: I0122 16:52:34.926396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5m9h\" (UniqueName: \"kubernetes.io/projected/0a5401a8-4432-405a-8cdd-06d21ee90ece-kube-api-access-c5m9h\") pod \"cinder-api-0\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " pod="openstack/cinder-api-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.070236 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.265932 4758 scope.go:117] "RemoveContainer" containerID="80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05" Jan 22 16:52:35 crc kubenswrapper[4758]: E0122 16:52:35.272439 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05\": container with ID starting with 80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05 not found: ID does not exist" containerID="80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.272484 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05"} err="failed to get container status \"80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05\": rpc error: code = NotFound desc = could not find container \"80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05\": container with ID starting with 80422c6bd16684d478bb19c6ce24b0cfa026db240c2ed2e8d1e2c4600f669a05 not found: ID does not exist" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.354816 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.368988 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.391876 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.394972 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.403955 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.404262 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.405420 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.435587 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.435649 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-run-httpd\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.435865 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-config-data\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.436019 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b79c9\" (UniqueName: \"kubernetes.io/projected/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-kube-api-access-b79c9\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.436254 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.436318 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-log-httpd\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.436364 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-scripts\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.468473 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8596c585-sw7r9"] Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.511811 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f7466dcbf-g984f"] Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.513613 4758 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.515911 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7466dcbf-g984f"] Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.540866 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-log-httpd\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.540913 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-scripts\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.540985 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.541011 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-run-httpd\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.541036 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-config-data\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.541074 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b79c9\" (UniqueName: \"kubernetes.io/projected/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-kube-api-access-b79c9\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.541121 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.552812 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-log-httpd\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.553909 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-run-httpd\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.558763 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.567736 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.568719 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-scripts\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.570775 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-config-data\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.598723 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b79c9\" (UniqueName: \"kubernetes.io/projected/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-kube-api-access-b79c9\") pod \"ceilometer-0\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.645148 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.645217 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.645240 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp85r\" (UniqueName: \"kubernetes.io/projected/89b54d64-9045-40b1-a7fc-49d4dce849e6-kube-api-access-lp85r\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.645267 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-swift-storage-0\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.645320 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-svc\") pod 
\"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.645353 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-config\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.717463 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8596c585-sw7r9"] Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.742783 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.748861 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-svc\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.748907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-config\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.749021 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.749053 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.749068 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp85r\" (UniqueName: \"kubernetes.io/projected/89b54d64-9045-40b1-a7fc-49d4dce849e6-kube-api-access-lp85r\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.749091 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-swift-storage-0\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.749873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-swift-storage-0\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " 
pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.750426 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-config\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.750466 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.749872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-svc\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.750998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.951447 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp85r\" (UniqueName: \"kubernetes.io/projected/89b54d64-9045-40b1-a7fc-49d4dce849e6-kube-api-access-lp85r\") pod \"dnsmasq-dns-5f7466dcbf-g984f\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:35 crc kubenswrapper[4758]: I0122 16:52:35.960692 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.047964 4758 generic.go:334] "Generic (PLEG): container finished" podID="bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" containerID="e2aefce7778dbbd373cdccd0af2c0cec2d717824b19dced7d99d122d6a2a539e" exitCode=0 Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.048023 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" event={"ID":"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac","Type":"ContainerDied","Data":"e2aefce7778dbbd373cdccd0af2c0cec2d717824b19dced7d99d122d6a2a539e"} Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.056913 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" event={"ID":"cd5b4616-f0db-4639-a791-c8882e65f6ca","Type":"ContainerStarted","Data":"91aa93d9de3692d224a3e448cb4b5983ea39fcb4ed0a1602ea985f042784b45c"} Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.132679 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.162225 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.530768 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.532717 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.539591 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.539808 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-th7td" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.539936 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.552359 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.697952 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.699995 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.701957 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.708791 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-logs\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.709249 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.709334 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.709402 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4k2\" (UniqueName: \"kubernetes.io/projected/4c1d0803-658d-4bdb-8770-3d3921554591-kube-api-access-sz4k2\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.709510 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.709679 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.709875 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.729421 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812011 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812416 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812459 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812487 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfk8b\" (UniqueName: \"kubernetes.io/projected/a1383243-b82d-4aaa-876f-aad36c14158a-kube-api-access-lfk8b\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-logs\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812554 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812587 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812609 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4k2\" (UniqueName: \"kubernetes.io/projected/4c1d0803-658d-4bdb-8770-3d3921554591-kube-api-access-sz4k2\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812718 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812767 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812799 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812863 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.812910 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.813441 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.814259 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.818325 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-logs\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.821064 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.824193 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc 
kubenswrapper[4758]: I0122 16:52:36.825073 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67f1efb-4c74-4acd-9948-de1491a8479c" path="/var/lib/kubelet/pods/a67f1efb-4c74-4acd-9948-de1491a8479c/volumes" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.829553 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.839960 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4k2\" (UniqueName: \"kubernetes.io/projected/4c1d0803-658d-4bdb-8770-3d3921554591-kube-api-access-sz4k2\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.861108 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.869958 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.900490 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-88b76f788-th2jq" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.914887 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.914973 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.915064 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.915132 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfk8b\" (UniqueName: \"kubernetes.io/projected/a1383243-b82d-4aaa-876f-aad36c14158a-kube-api-access-lfk8b\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.915316 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.915365 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.915429 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.916945 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.917115 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.917220 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.923500 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.924716 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.937210 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfk8b\" (UniqueName: \"kubernetes.io/projected/a1383243-b82d-4aaa-876f-aad36c14158a-kube-api-access-lfk8b\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:36 crc kubenswrapper[4758]: I0122 16:52:36.939240 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.004574 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.021430 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:37 crc kubenswrapper[4758]: W0122 16:52:37.173218 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode140fc6a_db89_4748_be82_94765061de55.slice/crio-5430a78ab14ea97b8f9157db2c162b367099bcba084dd84c65f22ba450fefbe0 WatchSource:0}: Error finding container 5430a78ab14ea97b8f9157db2c162b367099bcba084dd84c65f22ba450fefbe0: Status 404 returned error can't find the container with id 5430a78ab14ea97b8f9157db2c162b367099bcba084dd84c65f22ba450fefbe0 Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.215642 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.329139 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-svc\") pod \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.329205 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjg8n\" (UniqueName: \"kubernetes.io/projected/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-kube-api-access-jjg8n\") pod \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.329241 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-nb\") pod \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.329269 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-config\") pod \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.329294 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-swift-storage-0\") pod \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\" (UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.329385 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-sb\") pod \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\" 
(UID: \"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac\") " Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.335776 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-kube-api-access-jjg8n" (OuterVolumeSpecName: "kube-api-access-jjg8n") pod "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" (UID: "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac"). InnerVolumeSpecName "kube-api-access-jjg8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.403486 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" (UID: "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.411037 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" (UID: "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.414478 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" (UID: "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.414767 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" (UID: "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.419244 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-config" (OuterVolumeSpecName: "config") pod "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" (UID: "bbc57e1b-3cb7-4bce-91e8-d31356bf83ac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.431183 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.431223 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjg8n\" (UniqueName: \"kubernetes.io/projected/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-kube-api-access-jjg8n\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.431238 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.431249 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.431263 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.431274 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:37 crc kubenswrapper[4758]: I0122 16:52:37.802340 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.096924 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a5401a8-4432-405a-8cdd-06d21ee90ece","Type":"ContainerStarted","Data":"5a482cb62e7fa3afac72f4431e885723af569f5bddbc1b69c24b2c83d129822b"} Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.112922 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d","Type":"ContainerStarted","Data":"df1d6d2dd7d9c2c797adc55ed22ba1381da8fcf9a069fa1eccc001007b7ed94b"} Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.130956 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" event={"ID":"bbc57e1b-3cb7-4bce-91e8-d31356bf83ac","Type":"ContainerDied","Data":"759d46188af680ed12e3bf189c53a0f6798ce1612e51c45faf3de9dcb10f6043"} Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.131004 4758 scope.go:117] "RemoveContainer" containerID="e2aefce7778dbbd373cdccd0af2c0cec2d717824b19dced7d99d122d6a2a539e" Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.131158 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9c99f667-72ftn" Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.136984 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-86d8479bd8-rrvgj" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.167:9696/\": dial tcp 10.217.0.167:9696: connect: connection refused" Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.144989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8596c585-sw7r9" event={"ID":"e140fc6a-db89-4748-be82-94765061de55","Type":"ContainerStarted","Data":"5430a78ab14ea97b8f9157db2c162b367099bcba084dd84c65f22ba450fefbe0"} Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.322236 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.324056 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.414399 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.442466 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.615230 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9c99f667-72ftn"] Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.659569 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9c99f667-72ftn"] Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.729987 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7466dcbf-g984f"] Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.902893 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" path="/var/lib/kubelet/pods/bbc57e1b-3cb7-4bce-91e8-d31356bf83ac/volumes" Jan 22 16:52:38 crc kubenswrapper[4758]: I0122 16:52:38.922936 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.035334 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.234041 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8596c585-sw7r9" event={"ID":"e140fc6a-db89-4748-be82-94765061de55","Type":"ContainerStarted","Data":"04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587"} Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.234240 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8596c585-sw7r9" podUID="e140fc6a-db89-4748-be82-94765061de55" containerName="init" containerID="cri-o://04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587" gracePeriod=10 Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.237331 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" event={"ID":"b4115ae1-f42e-40b7-b82a-74d7e4abfa77","Type":"ContainerStarted","Data":"c3e888dfab81cf6dbcb138fab88e6eb4aabdd350e7e71d44b5f6be834120015f"} Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.241434 
4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c1d0803-658d-4bdb-8770-3d3921554591","Type":"ContainerStarted","Data":"839071acf6a607e5fdccef048f7b7875c23ea52a14bf7e3d9ab757714f863069"} Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.244571 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" event={"ID":"89b54d64-9045-40b1-a7fc-49d4dce849e6","Type":"ContainerStarted","Data":"0e95c4607190c9ad512fa391d634eb3edd6661d06281150232fe008a9c2ec9a8"} Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.264956 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerStarted","Data":"f5e4235e98107cb1e473405543af580e1db21446d3e7c65c4138bcb0b2364577"} Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.274340 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-775569c6d5-2vjq7" event={"ID":"925ad838-b20e-48b3-9ee7-08133afb7840","Type":"ContainerStarted","Data":"d586767244f695a357a99038d6c9acb6266146aeb2977f3441e495fecc8fdcf1"} Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.318602 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" event={"ID":"cd5b4616-f0db-4639-a791-c8882e65f6ca","Type":"ContainerStarted","Data":"3b2e7ee039de7f55d394f9218bcd174b16bf80f81b4fed8aba2bb2eff102017e"} Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.319767 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.319799 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.321851 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1383243-b82d-4aaa-876f-aad36c14158a","Type":"ContainerStarted","Data":"17019e62cb4154d285ebe6847dc6cf2dc95ee42f42987bb3c08b929e970d7b29"} Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.367073 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.382824 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podStartSLOduration=18.382807218 podStartE2EDuration="18.382807218s" podCreationTimestamp="2026-01-22 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:39.360278033 +0000 UTC m=+1380.843617318" watchObservedRunningTime="2026-01-22 16:52:39.382807218 +0000 UTC m=+1380.866146503" Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.523903 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:39 crc kubenswrapper[4758]: I0122 16:52:39.640754 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.319314 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.382189 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-775569c6d5-2vjq7" event={"ID":"925ad838-b20e-48b3-9ee7-08133afb7840","Type":"ContainerStarted","Data":"f3d40cde65b37956c53b343e54386f1382357fd2a2e61fb8d58f8afd2c5b205a"} Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.404300 4758 generic.go:334] "Generic (PLEG): container finished" podID="e140fc6a-db89-4748-be82-94765061de55" containerID="04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587" exitCode=0 Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.404636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8596c585-sw7r9" event={"ID":"e140fc6a-db89-4748-be82-94765061de55","Type":"ContainerDied","Data":"04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587"} Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.404665 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8596c585-sw7r9" event={"ID":"e140fc6a-db89-4748-be82-94765061de55","Type":"ContainerDied","Data":"5430a78ab14ea97b8f9157db2c162b367099bcba084dd84c65f22ba450fefbe0"} Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.404681 4758 scope.go:117] "RemoveContainer" containerID="04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.404872 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8596c585-sw7r9" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.411207 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-775569c6d5-2vjq7" podStartSLOduration=15.539080617 podStartE2EDuration="19.411187379s" podCreationTimestamp="2026-01-22 16:52:21 +0000 UTC" firstStartedPulling="2026-01-22 16:52:34.073826841 +0000 UTC m=+1375.557166126" lastFinishedPulling="2026-01-22 16:52:37.945933603 +0000 UTC m=+1379.429272888" observedRunningTime="2026-01-22 16:52:40.403227651 +0000 UTC m=+1381.886566936" watchObservedRunningTime="2026-01-22 16:52:40.411187379 +0000 UTC m=+1381.894526664" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.415298 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a5401a8-4432-405a-8cdd-06d21ee90ece","Type":"ContainerStarted","Data":"67f04c8edd0bfadf7999eb3e60499af7612f6aba062524c649cf701fd1c49e86"} Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.424660 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" event={"ID":"89b54d64-9045-40b1-a7fc-49d4dce849e6","Type":"ContainerDied","Data":"f2426189211acadfd3582be9b2a5a2092dba629617ca09b0e836b6e6e3773f47"} Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.425310 4758 generic.go:334] "Generic (PLEG): container finished" podID="89b54d64-9045-40b1-a7fc-49d4dce849e6" containerID="f2426189211acadfd3582be9b2a5a2092dba629617ca09b0e836b6e6e3773f47" exitCode=0 Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.442947 4758 scope.go:117] "RemoveContainer" containerID="04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.446859 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-config\") pod 
\"e140fc6a-db89-4748-be82-94765061de55\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.446988 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-svc\") pod \"e140fc6a-db89-4748-be82-94765061de55\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.447061 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-swift-storage-0\") pod \"e140fc6a-db89-4748-be82-94765061de55\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.447130 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-sb\") pod \"e140fc6a-db89-4748-be82-94765061de55\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.447209 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-nb\") pod \"e140fc6a-db89-4748-be82-94765061de55\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.447415 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgf64\" (UniqueName: \"kubernetes.io/projected/e140fc6a-db89-4748-be82-94765061de55-kube-api-access-bgf64\") pod \"e140fc6a-db89-4748-be82-94765061de55\" (UID: \"e140fc6a-db89-4748-be82-94765061de55\") " Jan 22 16:52:40 crc kubenswrapper[4758]: E0122 16:52:40.449436 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587\": container with ID starting with 04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587 not found: ID does not exist" containerID="04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.449495 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587"} err="failed to get container status \"04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587\": rpc error: code = NotFound desc = could not find container \"04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587\": container with ID starting with 04157e7df119d5bf2d1e5bdbbe8cc4612fbaa7978f76b193665bd18a99527587 not found: ID does not exist" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.524993 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e140fc6a-db89-4748-be82-94765061de55-kube-api-access-bgf64" (OuterVolumeSpecName: "kube-api-access-bgf64") pod "e140fc6a-db89-4748-be82-94765061de55" (UID: "e140fc6a-db89-4748-be82-94765061de55"). InnerVolumeSpecName "kube-api-access-bgf64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.549768 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgf64\" (UniqueName: \"kubernetes.io/projected/e140fc6a-db89-4748-be82-94765061de55-kube-api-access-bgf64\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.797305 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e140fc6a-db89-4748-be82-94765061de55" (UID: "e140fc6a-db89-4748-be82-94765061de55"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.815847 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e140fc6a-db89-4748-be82-94765061de55" (UID: "e140fc6a-db89-4748-be82-94765061de55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.836679 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e140fc6a-db89-4748-be82-94765061de55" (UID: "e140fc6a-db89-4748-be82-94765061de55"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.851460 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-config" (OuterVolumeSpecName: "config") pod "e140fc6a-db89-4748-be82-94765061de55" (UID: "e140fc6a-db89-4748-be82-94765061de55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.860046 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.860083 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.860095 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.860106 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.911232 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e140fc6a-db89-4748-be82-94765061de55" (UID: "e140fc6a-db89-4748-be82-94765061de55"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:40 crc kubenswrapper[4758]: I0122 16:52:40.961402 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e140fc6a-db89-4748-be82-94765061de55-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.237558 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-877b57c45-cs9rd" Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.237603 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.237627 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.493823 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f8f6c6576-zfqs4"] Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.494249 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f8f6c6576-zfqs4" podUID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerName="neutron-api" containerID="cri-o://0c91657a572b3b34b8817f7c25202435a5ff9b50a99f94fed486d107c72a8bd0" gracePeriod=30 Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.494607 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f8f6c6576-zfqs4" podUID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerName="neutron-httpd" containerID="cri-o://c0ef1600c909cea06f743be6661231c80d0f2cf31472785a373ddde21f6e6f4b" gracePeriod=30 Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.502714 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" event={"ID":"b4115ae1-f42e-40b7-b82a-74d7e4abfa77","Type":"ContainerStarted","Data":"0b5a8c0a5b24d6c6fe06f68c606db17d62b440899caa19897e0f58b91604f2f9"} Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.515139 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8596c585-sw7r9"] Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.526276 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d","Type":"ContainerStarted","Data":"fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf"} Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.535029 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8596c585-sw7r9"] Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.564721 4758 generic.go:334] "Generic (PLEG): container finished" podID="ea53227e-7c78-42b4-959c-dd2531914be2" containerID="a242bb86d02a02912959476d1e89c5801e3e8b0a179d33e8ede7e504d5a32eae" exitCode=1 Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.564801 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ea53227e-7c78-42b4-959c-dd2531914be2","Type":"ContainerDied","Data":"a242bb86d02a02912959476d1e89c5801e3e8b0a179d33e8ede7e504d5a32eae"} Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.564834 4758 scope.go:117] "RemoveContainer" containerID="879e0aeb8d1bcac2eefb400de2ed81acbc3af9e70161b2e8d9775267f2afb046" Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.575387 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerStarted","Data":"a87ec731c15c865dfd922ff358e50c07ec711fad452c4bc5d2435063607b9f52"} Jan 22 16:52:41 crc kubenswrapper[4758]: I0122 16:52:41.606001 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5fbd4457db-5gt55" podStartSLOduration=16.397344308 podStartE2EDuration="20.605981818s" podCreationTimestamp="2026-01-22 16:52:21 +0000 UTC" firstStartedPulling="2026-01-22 16:52:33.665075178 +0000 UTC m=+1375.148414463" lastFinishedPulling="2026-01-22 16:52:37.873712688 +0000 UTC m=+1379.357051973" observedRunningTime="2026-01-22 16:52:41.593391024 +0000 UTC m=+1383.076730309" watchObservedRunningTime="2026-01-22 16:52:41.605981818 +0000 UTC m=+1383.089321103" Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.629525 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1383243-b82d-4aaa-876f-aad36c14158a","Type":"ContainerStarted","Data":"957da5532f165f92f1aca059a509fea4f4d557da7b77fc0b490d72e8187a1820"} Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.649653 4758 generic.go:334] "Generic (PLEG): container finished" podID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerID="c0ef1600c909cea06f743be6661231c80d0f2cf31472785a373ddde21f6e6f4b" exitCode=0 Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.649771 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8f6c6576-zfqs4" event={"ID":"b7312e42-6737-4296-a35b-39bbb4a6f21b","Type":"ContainerDied","Data":"c0ef1600c909cea06f743be6661231c80d0f2cf31472785a373ddde21f6e6f4b"} Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.665592 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c1d0803-658d-4bdb-8770-3d3921554591","Type":"ContainerStarted","Data":"0552816a9ba646bbab3b68c9a8675ed966b7ebf91f680a33ae8338aaf77e68a7"} Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.700243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ea53227e-7c78-42b4-959c-dd2531914be2","Type":"ContainerDied","Data":"c77ced53f64d07ef3a38ca638ea8cd3142878c1beb3143a78ba43a71d899d5f1"} Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.700282 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c77ced53f64d07ef3a38ca638ea8cd3142878c1beb3143a78ba43a71d899d5f1" Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.724220 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.832006 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-custom-prometheus-ca\") pod \"ea53227e-7c78-42b4-959c-dd2531914be2\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.832059 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-combined-ca-bundle\") pod \"ea53227e-7c78-42b4-959c-dd2531914be2\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.832162 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb4hd\" (UniqueName: \"kubernetes.io/projected/ea53227e-7c78-42b4-959c-dd2531914be2-kube-api-access-hb4hd\") pod \"ea53227e-7c78-42b4-959c-dd2531914be2\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.833705 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-config-data\") pod \"ea53227e-7c78-42b4-959c-dd2531914be2\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.833826 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea53227e-7c78-42b4-959c-dd2531914be2-logs\") pod \"ea53227e-7c78-42b4-959c-dd2531914be2\" (UID: \"ea53227e-7c78-42b4-959c-dd2531914be2\") " Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.834996 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e140fc6a-db89-4748-be82-94765061de55" path="/var/lib/kubelet/pods/e140fc6a-db89-4748-be82-94765061de55/volumes" Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.835580 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea53227e-7c78-42b4-959c-dd2531914be2-logs" (OuterVolumeSpecName: "logs") pod "ea53227e-7c78-42b4-959c-dd2531914be2" (UID: "ea53227e-7c78-42b4-959c-dd2531914be2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.852364 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea53227e-7c78-42b4-959c-dd2531914be2-kube-api-access-hb4hd" (OuterVolumeSpecName: "kube-api-access-hb4hd") pod "ea53227e-7c78-42b4-959c-dd2531914be2" (UID: "ea53227e-7c78-42b4-959c-dd2531914be2"). InnerVolumeSpecName "kube-api-access-hb4hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.937849 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea53227e-7c78-42b4-959c-dd2531914be2-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:42 crc kubenswrapper[4758]: I0122 16:52:42.938558 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb4hd\" (UniqueName: \"kubernetes.io/projected/ea53227e-7c78-42b4-959c-dd2531914be2-kube-api-access-hb4hd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.045207 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea53227e-7c78-42b4-959c-dd2531914be2" (UID: "ea53227e-7c78-42b4-959c-dd2531914be2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.047137 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.085705 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ea53227e-7c78-42b4-959c-dd2531914be2" (UID: "ea53227e-7c78-42b4-959c-dd2531914be2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.148372 4758 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.205307 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-config-data" (OuterVolumeSpecName: "config-data") pod "ea53227e-7c78-42b4-959c-dd2531914be2" (UID: "ea53227e-7c78-42b4-959c-dd2531914be2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.253110 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea53227e-7c78-42b4-959c-dd2531914be2-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.596118 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86d8479bd8-rrvgj_b37953c7-685d-4a7e-85fd-a2964e025825/neutron-api/0.log" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.596409 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.621916 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.733878 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6c78f7b546-sv5rx" podUID="177272b6-b55b-4e45-9336-d6227af172d0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.176:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.733913 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6c78f7b546-sv5rx" podUID="177272b6-b55b-4e45-9336-d6227af172d0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.176:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.762893 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-combined-ca-bundle\") pod \"b37953c7-685d-4a7e-85fd-a2964e025825\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.762938 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-httpd-config\") pod \"b37953c7-685d-4a7e-85fd-a2964e025825\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.762989 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-ovndb-tls-certs\") pod \"b37953c7-685d-4a7e-85fd-a2964e025825\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.763012 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-config\") pod \"b37953c7-685d-4a7e-85fd-a2964e025825\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.763068 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsgtl\" (UniqueName: \"kubernetes.io/projected/b37953c7-685d-4a7e-85fd-a2964e025825-kube-api-access-vsgtl\") pod \"b37953c7-685d-4a7e-85fd-a2964e025825\" (UID: \"b37953c7-685d-4a7e-85fd-a2964e025825\") " Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.766442 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d","Type":"ContainerStarted","Data":"1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19"} Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.777827 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c1d0803-658d-4bdb-8770-3d3921554591","Type":"ContainerStarted","Data":"bcce4d53124194930cd7b4bba64d473ac0a2114c056e048e63f5ca406fbf45fc"} Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.777987 4758 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-external-api-0" podUID="4c1d0803-658d-4bdb-8770-3d3921554591" containerName="glance-log" containerID="cri-o://0552816a9ba646bbab3b68c9a8675ed966b7ebf91f680a33ae8338aaf77e68a7" gracePeriod=30 Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.778513 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4c1d0803-658d-4bdb-8770-3d3921554591" containerName="glance-httpd" containerID="cri-o://bcce4d53124194930cd7b4bba64d473ac0a2114c056e048e63f5ca406fbf45fc" gracePeriod=30 Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.789918 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b37953c7-685d-4a7e-85fd-a2964e025825" (UID: "b37953c7-685d-4a7e-85fd-a2964e025825"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.806133 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" event={"ID":"89b54d64-9045-40b1-a7fc-49d4dce849e6","Type":"ContainerStarted","Data":"04709b65415b5ce55c5e501fd59e6359307278c8ee978a585a593c53c836b627"} Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.807247 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.810983 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37953c7-685d-4a7e-85fd-a2964e025825-kube-api-access-vsgtl" (OuterVolumeSpecName: "kube-api-access-vsgtl") pod "b37953c7-685d-4a7e-85fd-a2964e025825" (UID: "b37953c7-685d-4a7e-85fd-a2964e025825"). InnerVolumeSpecName "kube-api-access-vsgtl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.820412 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.729946532 podStartE2EDuration="9.820389749s" podCreationTimestamp="2026-01-22 16:52:34 +0000 UTC" firstStartedPulling="2026-01-22 16:52:37.184476598 +0000 UTC m=+1378.667815883" lastFinishedPulling="2026-01-22 16:52:38.274919815 +0000 UTC m=+1379.758259100" observedRunningTime="2026-01-22 16:52:43.792537298 +0000 UTC m=+1385.275876593" watchObservedRunningTime="2026-01-22 16:52:43.820389749 +0000 UTC m=+1385.303729044" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.856046 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.856021603 podStartE2EDuration="8.856021603s" podCreationTimestamp="2026-01-22 16:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:43.828415058 +0000 UTC m=+1385.311754343" watchObservedRunningTime="2026-01-22 16:52:43.856021603 +0000 UTC m=+1385.339360888" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.856687 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1383243-b82d-4aaa-876f-aad36c14158a","Type":"ContainerStarted","Data":"686e4484c8c23dd808c8c81a761d97343138ed477de30a2c8c237cecd7b034ef"} Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.860157 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a1383243-b82d-4aaa-876f-aad36c14158a" containerName="glance-log" containerID="cri-o://957da5532f165f92f1aca059a509fea4f4d557da7b77fc0b490d72e8187a1820" gracePeriod=30 Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.860572 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a1383243-b82d-4aaa-876f-aad36c14158a" containerName="glance-httpd" containerID="cri-o://686e4484c8c23dd808c8c81a761d97343138ed477de30a2c8c237cecd7b034ef" gracePeriod=30 Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.864051 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" podStartSLOduration=8.864028062 podStartE2EDuration="8.864028062s" podCreationTimestamp="2026-01-22 16:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:43.853200566 +0000 UTC m=+1385.336539851" watchObservedRunningTime="2026-01-22 16:52:43.864028062 +0000 UTC m=+1385.347367347" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.868255 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.868277 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsgtl\" (UniqueName: \"kubernetes.io/projected/b37953c7-685d-4a7e-85fd-a2964e025825-kube-api-access-vsgtl\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.901179 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.901164046 podStartE2EDuration="8.901164046s" podCreationTimestamp="2026-01-22 16:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:43.88410542 +0000 UTC m=+1385.367444705" watchObservedRunningTime="2026-01-22 16:52:43.901164046 +0000 UTC m=+1385.384503331" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.907038 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerStarted","Data":"6042a0e469a10144b1d7b8eec0dab7ca6f4bdf95df62b73166c44465f0907f22"} Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.924152 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86d8479bd8-rrvgj_b37953c7-685d-4a7e-85fd-a2964e025825/neutron-api/0.log" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.924244 4758 generic.go:334] "Generic (PLEG): container finished" podID="b37953c7-685d-4a7e-85fd-a2964e025825" containerID="1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019" exitCode=137 Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.924425 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d8479bd8-rrvgj" event={"ID":"b37953c7-685d-4a7e-85fd-a2964e025825","Type":"ContainerDied","Data":"1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019"} Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.924487 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d8479bd8-rrvgj" event={"ID":"b37953c7-685d-4a7e-85fd-a2964e025825","Type":"ContainerDied","Data":"21830ff0562d03fac8b6c3dcf351712b2fa2309112b08dc1b3eb9338d5071507"} Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.924511 4758 scope.go:117] "RemoveContainer" containerID="c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.924811 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86d8479bd8-rrvgj" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.961183 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.961238 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a5401a8-4432-405a-8cdd-06d21ee90ece","Type":"ContainerStarted","Data":"9ed7716389bb42108c02f6a05b6d832a0b8d104d94073ae0784b73206992c4da"} Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.961289 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerName="cinder-api-log" containerID="cri-o://67f04c8edd0bfadf7999eb3e60499af7612f6aba062524c649cf701fd1c49e86" gracePeriod=30 Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.961455 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerName="cinder-api" containerID="cri-o://9ed7716389bb42108c02f6a05b6d832a0b8d104d94073ae0784b73206992c4da" gracePeriod=30 Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.961542 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.974980 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b37953c7-685d-4a7e-85fd-a2964e025825" (UID: "b37953c7-685d-4a7e-85fd-a2964e025825"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.985047 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b37953c7-685d-4a7e-85fd-a2964e025825" (UID: "b37953c7-685d-4a7e-85fd-a2964e025825"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:43 crc kubenswrapper[4758]: I0122 16:52:43.987250 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-config" (OuterVolumeSpecName: "config") pod "b37953c7-685d-4a7e-85fd-a2964e025825" (UID: "b37953c7-685d-4a7e-85fd-a2964e025825"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.016501 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.016476569 podStartE2EDuration="10.016476569s" podCreationTimestamp="2026-01-22 16:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:43.995176147 +0000 UTC m=+1385.478515432" watchObservedRunningTime="2026-01-22 16:52:44.016476569 +0000 UTC m=+1385.499815854" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.080605 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.080632 4758 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.080642 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b37953c7-685d-4a7e-85fd-a2964e025825-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.309030 4758 scope.go:117] "RemoveContainer" containerID="1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.324708 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.344625 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.360306 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86d8479bd8-rrvgj"] Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.369044 4758 scope.go:117] "RemoveContainer" containerID="c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9" Jan 22 16:52:44 crc kubenswrapper[4758]: E0122 16:52:44.369552 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9\": container with ID starting with c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9 not found: ID does not exist" containerID="c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.369568 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.369581 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9"} err="failed to get container status \"c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9\": rpc error: code = NotFound desc = could not find container \"c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9\": container with ID starting with c10ae485f186912a6e35b078a622dbd0915dce04f6b1eb7a7a6feee6114d5ac9 not found: ID does not exist" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.369602 4758 scope.go:117] "RemoveContainer" 
containerID="1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019" Jan 22 16:52:44 crc kubenswrapper[4758]: E0122 16:52:44.369815 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019\": container with ID starting with 1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019 not found: ID does not exist" containerID="1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.369834 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019"} err="failed to get container status \"1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019\": rpc error: code = NotFound desc = could not find container \"1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019\": container with ID starting with 1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019 not found: ID does not exist" Jan 22 16:52:44 crc kubenswrapper[4758]: E0122 16:52:44.369941 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" containerName="init" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.369957 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" containerName="init" Jan 22 16:52:44 crc kubenswrapper[4758]: E0122 16:52:44.369975 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" containerName="neutron-httpd" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.369982 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" containerName="neutron-httpd" Jan 22 16:52:44 crc kubenswrapper[4758]: E0122 16:52:44.369995 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" containerName="neutron-api" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370001 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" containerName="neutron-api" Jan 22 16:52:44 crc kubenswrapper[4758]: E0122 16:52:44.370015 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" containerName="watcher-decision-engine" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370021 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" containerName="watcher-decision-engine" Jan 22 16:52:44 crc kubenswrapper[4758]: E0122 16:52:44.370030 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" containerName="watcher-decision-engine" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370036 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" containerName="watcher-decision-engine" Jan 22 16:52:44 crc kubenswrapper[4758]: E0122 16:52:44.370045 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e140fc6a-db89-4748-be82-94765061de55" containerName="init" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370063 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e140fc6a-db89-4748-be82-94765061de55" containerName="init" Jan 22 16:52:44 crc kubenswrapper[4758]: E0122 
16:52:44.370073 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" containerName="watcher-decision-engine" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370078 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" containerName="watcher-decision-engine" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370247 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e140fc6a-db89-4748-be82-94765061de55" containerName="init" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370266 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" containerName="neutron-httpd" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370284 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" containerName="watcher-decision-engine" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370290 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc57e1b-3cb7-4bce-91e8-d31356bf83ac" containerName="init" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370304 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" containerName="neutron-api" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370314 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" containerName="watcher-decision-engine" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.370926 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.376651 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.380571 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86d8479bd8-rrvgj"] Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.393179 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.394662 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.394698 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5q2b\" (UniqueName: \"kubernetes.io/projected/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-kube-api-access-g5q2b\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.394728 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-logs\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.394942 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.395015 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.497367 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.497418 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5q2b\" (UniqueName: \"kubernetes.io/projected/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-kube-api-access-g5q2b\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.497447 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-logs\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.497516 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.497540 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.500052 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-logs\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.503664 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.506123 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.512462 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.541105 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5q2b\" (UniqueName: \"kubernetes.io/projected/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-kube-api-access-g5q2b\") pod \"watcher-decision-engine-0\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.725146 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.825596 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37953c7-685d-4a7e-85fd-a2964e025825" path="/var/lib/kubelet/pods/b37953c7-685d-4a7e-85fd-a2964e025825/volumes" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.826489 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" path="/var/lib/kubelet/pods/ea53227e-7c78-42b4-959c-dd2531914be2/volumes" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.860643 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cd69747bd-jv5rb" Jan 22 16:52:44 crc kubenswrapper[4758]: I0122 16:52:44.885390 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.047034 4758 generic.go:334] "Generic (PLEG): container finished" podID="a1383243-b82d-4aaa-876f-aad36c14158a" containerID="686e4484c8c23dd808c8c81a761d97343138ed477de30a2c8c237cecd7b034ef" exitCode=143 Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.047094 4758 generic.go:334] "Generic (PLEG): container finished" podID="a1383243-b82d-4aaa-876f-aad36c14158a" containerID="957da5532f165f92f1aca059a509fea4f4d557da7b77fc0b490d72e8187a1820" exitCode=143 Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.047165 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1383243-b82d-4aaa-876f-aad36c14158a","Type":"ContainerDied","Data":"686e4484c8c23dd808c8c81a761d97343138ed477de30a2c8c237cecd7b034ef"} Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.047194 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1383243-b82d-4aaa-876f-aad36c14158a","Type":"ContainerDied","Data":"957da5532f165f92f1aca059a509fea4f4d557da7b77fc0b490d72e8187a1820"} Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.071856 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerStarted","Data":"0551082ac671f06b3f279766df1407ea976e33632c448297e015ccc75c12a048"} Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.111178 4758 generic.go:334] "Generic 
(PLEG): container finished" podID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerID="67f04c8edd0bfadf7999eb3e60499af7612f6aba062524c649cf701fd1c49e86" exitCode=143 Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.111355 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a5401a8-4432-405a-8cdd-06d21ee90ece","Type":"ContainerDied","Data":"67f04c8edd0bfadf7999eb3e60499af7612f6aba062524c649cf701fd1c49e86"} Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.115930 4758 generic.go:334] "Generic (PLEG): container finished" podID="4c1d0803-658d-4bdb-8770-3d3921554591" containerID="bcce4d53124194930cd7b4bba64d473ac0a2114c056e048e63f5ca406fbf45fc" exitCode=143 Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.115966 4758 generic.go:334] "Generic (PLEG): container finished" podID="4c1d0803-658d-4bdb-8770-3d3921554591" containerID="0552816a9ba646bbab3b68c9a8675ed966b7ebf91f680a33ae8338aaf77e68a7" exitCode=143 Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.116057 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c1d0803-658d-4bdb-8770-3d3921554591","Type":"ContainerDied","Data":"bcce4d53124194930cd7b4bba64d473ac0a2114c056e048e63f5ca406fbf45fc"} Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.116999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c1d0803-658d-4bdb-8770-3d3921554591","Type":"ContainerDied","Data":"0552816a9ba646bbab3b68c9a8675ed966b7ebf91f680a33ae8338aaf77e68a7"} Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.468881 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.499778 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.508037 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.628449 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfk8b\" (UniqueName: \"kubernetes.io/projected/a1383243-b82d-4aaa-876f-aad36c14158a-kube-api-access-lfk8b\") pod \"a1383243-b82d-4aaa-876f-aad36c14158a\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.628884 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-httpd-run\") pod \"4c1d0803-658d-4bdb-8770-3d3921554591\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.628970 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-config-data\") pod \"a1383243-b82d-4aaa-876f-aad36c14158a\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629007 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-combined-ca-bundle\") pod \"a1383243-b82d-4aaa-876f-aad36c14158a\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629059 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz4k2\" (UniqueName: \"kubernetes.io/projected/4c1d0803-658d-4bdb-8770-3d3921554591-kube-api-access-sz4k2\") pod \"4c1d0803-658d-4bdb-8770-3d3921554591\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629096 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-combined-ca-bundle\") pod \"4c1d0803-658d-4bdb-8770-3d3921554591\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629143 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-scripts\") pod \"4c1d0803-658d-4bdb-8770-3d3921554591\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629175 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-config-data\") pod \"4c1d0803-658d-4bdb-8770-3d3921554591\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629243 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a1383243-b82d-4aaa-876f-aad36c14158a\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629270 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-logs\") pod \"a1383243-b82d-4aaa-876f-aad36c14158a\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " Jan 22 
16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629296 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-logs\") pod \"4c1d0803-658d-4bdb-8770-3d3921554591\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629347 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-httpd-run\") pod \"a1383243-b82d-4aaa-876f-aad36c14158a\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629386 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-scripts\") pod \"a1383243-b82d-4aaa-876f-aad36c14158a\" (UID: \"a1383243-b82d-4aaa-876f-aad36c14158a\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.629461 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4c1d0803-658d-4bdb-8770-3d3921554591\" (UID: \"4c1d0803-658d-4bdb-8770-3d3921554591\") " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.633070 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-logs" (OuterVolumeSpecName: "logs") pod "a1383243-b82d-4aaa-876f-aad36c14158a" (UID: "a1383243-b82d-4aaa-876f-aad36c14158a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.635422 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-logs" (OuterVolumeSpecName: "logs") pod "4c1d0803-658d-4bdb-8770-3d3921554591" (UID: "4c1d0803-658d-4bdb-8770-3d3921554591"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.635679 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a1383243-b82d-4aaa-876f-aad36c14158a" (UID: "a1383243-b82d-4aaa-876f-aad36c14158a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.640565 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-scripts" (OuterVolumeSpecName: "scripts") pod "a1383243-b82d-4aaa-876f-aad36c14158a" (UID: "a1383243-b82d-4aaa-876f-aad36c14158a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.640712 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "4c1d0803-658d-4bdb-8770-3d3921554591" (UID: "4c1d0803-658d-4bdb-8770-3d3921554591"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.665040 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4c1d0803-658d-4bdb-8770-3d3921554591" (UID: "4c1d0803-658d-4bdb-8770-3d3921554591"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.665179 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-scripts" (OuterVolumeSpecName: "scripts") pod "4c1d0803-658d-4bdb-8770-3d3921554591" (UID: "4c1d0803-658d-4bdb-8770-3d3921554591"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.667926 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "a1383243-b82d-4aaa-876f-aad36c14158a" (UID: "a1383243-b82d-4aaa-876f-aad36c14158a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.671902 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1383243-b82d-4aaa-876f-aad36c14158a-kube-api-access-lfk8b" (OuterVolumeSpecName: "kube-api-access-lfk8b") pod "a1383243-b82d-4aaa-876f-aad36c14158a" (UID: "a1383243-b82d-4aaa-876f-aad36c14158a"). InnerVolumeSpecName "kube-api-access-lfk8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.721935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1d0803-658d-4bdb-8770-3d3921554591-kube-api-access-sz4k2" (OuterVolumeSpecName: "kube-api-access-sz4k2") pod "4c1d0803-658d-4bdb-8770-3d3921554591" (UID: "4c1d0803-658d-4bdb-8770-3d3921554591"). InnerVolumeSpecName "kube-api-access-sz4k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.731369 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-config-data" (OuterVolumeSpecName: "config-data") pod "4c1d0803-658d-4bdb-8770-3d3921554591" (UID: "4c1d0803-658d-4bdb-8770-3d3921554591"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.731989 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz4k2\" (UniqueName: \"kubernetes.io/projected/4c1d0803-658d-4bdb-8770-3d3921554591-kube-api-access-sz4k2\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732023 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732032 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732057 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732067 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732075 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732083 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1383243-b82d-4aaa-876f-aad36c14158a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732093 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732105 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732115 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfk8b\" (UniqueName: \"kubernetes.io/projected/a1383243-b82d-4aaa-876f-aad36c14158a-kube-api-access-lfk8b\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.732123 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c1d0803-658d-4bdb-8770-3d3921554591-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.733376 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c1d0803-658d-4bdb-8770-3d3921554591" (UID: "4c1d0803-658d-4bdb-8770-3d3921554591"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.755935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1383243-b82d-4aaa-876f-aad36c14158a" (UID: "a1383243-b82d-4aaa-876f-aad36c14158a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.833373 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.834395 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.834417 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1d0803-658d-4bdb-8770-3d3921554591-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.834427 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.844035 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-config-data" (OuterVolumeSpecName: "config-data") pod "a1383243-b82d-4aaa-876f-aad36c14158a" (UID: "a1383243-b82d-4aaa-876f-aad36c14158a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.853245 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.936389 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1383243-b82d-4aaa-876f-aad36c14158a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:45 crc kubenswrapper[4758]: I0122 16:52:45.936414 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.137579 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1383243-b82d-4aaa-876f-aad36c14158a","Type":"ContainerDied","Data":"17019e62cb4154d285ebe6847dc6cf2dc95ee42f42987bb3c08b929e970d7b29"} Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.137639 4758 scope.go:117] "RemoveContainer" containerID="686e4484c8c23dd808c8c81a761d97343138ed477de30a2c8c237cecd7b034ef" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.137796 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.179497 4758 generic.go:334] "Generic (PLEG): container finished" podID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerID="9ed7716389bb42108c02f6a05b6d832a0b8d104d94073ae0784b73206992c4da" exitCode=0 Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.179599 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a5401a8-4432-405a-8cdd-06d21ee90ece","Type":"ContainerDied","Data":"9ed7716389bb42108c02f6a05b6d832a0b8d104d94073ae0784b73206992c4da"} Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.195826 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.222696 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.222799 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c1d0803-658d-4bdb-8770-3d3921554591","Type":"ContainerDied","Data":"839071acf6a607e5fdccef048f7b7875c23ea52a14bf7e3d9ab757714f863069"} Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.483052 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.504120 4758 scope.go:117] "RemoveContainer" containerID="957da5532f165f92f1aca059a509fea4f4d557da7b77fc0b490d72e8187a1820" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.525475 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.560950 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:52:46 crc kubenswrapper[4758]: E0122 16:52:46.561328 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1d0803-658d-4bdb-8770-3d3921554591" containerName="glance-log" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.561342 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1d0803-658d-4bdb-8770-3d3921554591" containerName="glance-log" Jan 22 16:52:46 crc kubenswrapper[4758]: E0122 16:52:46.561369 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1383243-b82d-4aaa-876f-aad36c14158a" containerName="glance-httpd" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.561396 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1383243-b82d-4aaa-876f-aad36c14158a" containerName="glance-httpd" Jan 22 16:52:46 crc kubenswrapper[4758]: E0122 16:52:46.561411 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1d0803-658d-4bdb-8770-3d3921554591" containerName="glance-httpd" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.561419 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1d0803-658d-4bdb-8770-3d3921554591" containerName="glance-httpd" Jan 22 16:52:46 crc kubenswrapper[4758]: E0122 16:52:46.561449 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1383243-b82d-4aaa-876f-aad36c14158a" containerName="glance-log" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.561455 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1383243-b82d-4aaa-876f-aad36c14158a" containerName="glance-log" Jan 22 16:52:46 crc 
kubenswrapper[4758]: I0122 16:52:46.561616 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1383243-b82d-4aaa-876f-aad36c14158a" containerName="glance-httpd" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.561630 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1383243-b82d-4aaa-876f-aad36c14158a" containerName="glance-log" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.561641 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1d0803-658d-4bdb-8770-3d3921554591" containerName="glance-log" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.561652 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea53227e-7c78-42b4-959c-dd2531914be2" containerName="watcher-decision-engine" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.561661 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1d0803-658d-4bdb-8770-3d3921554591" containerName="glance-httpd" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.562717 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.573828 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-th7td" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.574057 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.574189 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.582988 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.601555 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.625537 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.646806 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.668378 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.670083 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.676816 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.682904 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.682951 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.683009 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjkps\" (UniqueName: \"kubernetes.io/projected/efc0b77e-57a1-4a76-93ae-c56db1fd3969-kube-api-access-kjkps\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.683104 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.683242 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.683300 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.683329 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-logs\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.683345 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.684622 4758 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.684783 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.720024 4758 scope.go:117] "RemoveContainer" containerID="bcce4d53124194930cd7b4bba64d473ac0a2114c056e048e63f5ca406fbf45fc" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.731955 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c78f7b546-sv5rx" podUID="177272b6-b55b-4e45-9336-d6227af172d0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.176:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.732302 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c78f7b546-sv5rx" podUID="177272b6-b55b-4e45-9336-d6227af172d0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.176:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.797883 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjkps\" (UniqueName: \"kubernetes.io/projected/efc0b77e-57a1-4a76-93ae-c56db1fd3969-kube-api-access-kjkps\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.798304 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.799430 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.799598 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.799840 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.799958 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " 
pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.800046 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.800140 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-scripts\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.800277 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.800359 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-logs\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.800465 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-logs\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.800554 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.800784 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.800885 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.800974 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-config-data\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 
16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.801104 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6w7\" (UniqueName: \"kubernetes.io/projected/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-kube-api-access-9r6w7\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.801596 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.809249 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.809498 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-logs\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.812682 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.839410 4758 scope.go:117] "RemoveContainer" containerID="0552816a9ba646bbab3b68c9a8675ed966b7ebf91f680a33ae8338aaf77e68a7" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.843995 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.844062 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.845616 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.857251 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1d0803-658d-4bdb-8770-3d3921554591" path="/var/lib/kubelet/pods/4c1d0803-658d-4bdb-8770-3d3921554591/volumes" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 
16:52:46.862899 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1383243-b82d-4aaa-876f-aad36c14158a" path="/var/lib/kubelet/pods/a1383243-b82d-4aaa-876f-aad36c14158a/volumes" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.867989 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjkps\" (UniqueName: \"kubernetes.io/projected/efc0b77e-57a1-4a76-93ae-c56db1fd3969-kube-api-access-kjkps\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.899443 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-88b76f788-th2jq" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.905002 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-logs\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.905094 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-config-data\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.905119 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6w7\" (UniqueName: \"kubernetes.io/projected/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-kube-api-access-9r6w7\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.905175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.905203 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.905258 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.905274 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.905291 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-scripts\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.907195 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.911601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-logs\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.912409 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.947208 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.947306 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-scripts\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.947799 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.948585 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-config-data\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.970448 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6w7\" (UniqueName: \"kubernetes.io/projected/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-kube-api-access-9r6w7\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" 
Jan 22 16:52:46 crc kubenswrapper[4758]: I0122 16:52:46.992600 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5486585c8c-crbmm" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.097013 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " pod="openstack/glance-default-external-api-0" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.113715 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.121101 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.277830 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.297049 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerStarted","Data":"37127c9b7a1671450fd4af1c038c3c3c5b2f6249e9734fc5f89e5d196c959cf0"} Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.298184 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.346437 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.780666534 podStartE2EDuration="12.346410931s" podCreationTimestamp="2026-01-22 16:52:35 +0000 UTC" firstStartedPulling="2026-01-22 16:52:38.557654264 +0000 UTC m=+1380.040993549" lastFinishedPulling="2026-01-22 16:52:46.123398661 +0000 UTC m=+1387.606737946" observedRunningTime="2026-01-22 16:52:47.343373098 +0000 UTC m=+1388.826712383" watchObservedRunningTime="2026-01-22 16:52:47.346410931 +0000 UTC m=+1388.829750226" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.351840 4758 generic.go:334] "Generic (PLEG): container finished" podID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerID="0c91657a572b3b34b8817f7c25202435a5ff9b50a99f94fed486d107c72a8bd0" exitCode=0 Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.351924 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8f6c6576-zfqs4" event={"ID":"b7312e42-6737-4296-a35b-39bbb4a6f21b","Type":"ContainerDied","Data":"0c91657a572b3b34b8817f7c25202435a5ff9b50a99f94fed486d107c72a8bd0"} Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.395724 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a5401a8-4432-405a-8cdd-06d21ee90ece","Type":"ContainerDied","Data":"5a482cb62e7fa3afac72f4431e885723af569f5bddbc1b69c24b2c83d129822b"} Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.395793 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a482cb62e7fa3afac72f4431e885723af569f5bddbc1b69c24b2c83d129822b" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.404304 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.427726 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5","Type":"ContainerStarted","Data":"82eb701a31f22b2189008d04498f33dda0d615831b0b09fbc67e94bf80067085"} Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.427779 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5","Type":"ContainerStarted","Data":"45d441cf67290b2260d2d1e41cfbf4b22497910f339e5dc6f86d30b25c60d7dd"} Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.466970 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.486864 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.486844429 podStartE2EDuration="3.486844429s" podCreationTimestamp="2026-01-22 16:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:47.452250394 +0000 UTC m=+1388.935589679" watchObservedRunningTime="2026-01-22 16:52:47.486844429 +0000 UTC m=+1388.970183714" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.530307 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data\") pod \"0a5401a8-4432-405a-8cdd-06d21ee90ece\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.530378 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-scripts\") pod \"0a5401a8-4432-405a-8cdd-06d21ee90ece\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.530465 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-combined-ca-bundle\") pod \"0a5401a8-4432-405a-8cdd-06d21ee90ece\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.530507 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5m9h\" (UniqueName: \"kubernetes.io/projected/0a5401a8-4432-405a-8cdd-06d21ee90ece-kube-api-access-c5m9h\") pod \"0a5401a8-4432-405a-8cdd-06d21ee90ece\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.530535 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5401a8-4432-405a-8cdd-06d21ee90ece-logs\") pod \"0a5401a8-4432-405a-8cdd-06d21ee90ece\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.530591 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a5401a8-4432-405a-8cdd-06d21ee90ece-etc-machine-id\") pod \"0a5401a8-4432-405a-8cdd-06d21ee90ece\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " Jan 22 16:52:47 crc 
kubenswrapper[4758]: I0122 16:52:47.530726 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data-custom\") pod \"0a5401a8-4432-405a-8cdd-06d21ee90ece\" (UID: \"0a5401a8-4432-405a-8cdd-06d21ee90ece\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.531591 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5401a8-4432-405a-8cdd-06d21ee90ece-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0a5401a8-4432-405a-8cdd-06d21ee90ece" (UID: "0a5401a8-4432-405a-8cdd-06d21ee90ece"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.534224 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5401a8-4432-405a-8cdd-06d21ee90ece-logs" (OuterVolumeSpecName: "logs") pod "0a5401a8-4432-405a-8cdd-06d21ee90ece" (UID: "0a5401a8-4432-405a-8cdd-06d21ee90ece"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.555016 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0a5401a8-4432-405a-8cdd-06d21ee90ece" (UID: "0a5401a8-4432-405a-8cdd-06d21ee90ece"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.557121 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-scripts" (OuterVolumeSpecName: "scripts") pod "0a5401a8-4432-405a-8cdd-06d21ee90ece" (UID: "0a5401a8-4432-405a-8cdd-06d21ee90ece"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.557347 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5401a8-4432-405a-8cdd-06d21ee90ece-kube-api-access-c5m9h" (OuterVolumeSpecName: "kube-api-access-c5m9h") pod "0a5401a8-4432-405a-8cdd-06d21ee90ece" (UID: "0a5401a8-4432-405a-8cdd-06d21ee90ece"). InnerVolumeSpecName "kube-api-access-c5m9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.614861 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a5401a8-4432-405a-8cdd-06d21ee90ece" (UID: "0a5401a8-4432-405a-8cdd-06d21ee90ece"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.632948 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data" (OuterVolumeSpecName: "config-data") pod "0a5401a8-4432-405a-8cdd-06d21ee90ece" (UID: "0a5401a8-4432-405a-8cdd-06d21ee90ece"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.637413 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-httpd-config\") pod \"b7312e42-6737-4296-a35b-39bbb4a6f21b\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.637493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6stw\" (UniqueName: \"kubernetes.io/projected/b7312e42-6737-4296-a35b-39bbb4a6f21b-kube-api-access-x6stw\") pod \"b7312e42-6737-4296-a35b-39bbb4a6f21b\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.637561 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-config\") pod \"b7312e42-6737-4296-a35b-39bbb4a6f21b\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.637632 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-combined-ca-bundle\") pod \"b7312e42-6737-4296-a35b-39bbb4a6f21b\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.637845 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-ovndb-tls-certs\") pod \"b7312e42-6737-4296-a35b-39bbb4a6f21b\" (UID: \"b7312e42-6737-4296-a35b-39bbb4a6f21b\") " Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.638243 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.638254 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5m9h\" (UniqueName: \"kubernetes.io/projected/0a5401a8-4432-405a-8cdd-06d21ee90ece-kube-api-access-c5m9h\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.638267 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5401a8-4432-405a-8cdd-06d21ee90ece-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.638276 4758 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a5401a8-4432-405a-8cdd-06d21ee90ece-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.638285 4758 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.638295 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.638303 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/0a5401a8-4432-405a-8cdd-06d21ee90ece-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.653335 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7312e42-6737-4296-a35b-39bbb4a6f21b-kube-api-access-x6stw" (OuterVolumeSpecName: "kube-api-access-x6stw") pod "b7312e42-6737-4296-a35b-39bbb4a6f21b" (UID: "b7312e42-6737-4296-a35b-39bbb4a6f21b"). InnerVolumeSpecName "kube-api-access-x6stw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.655939 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b7312e42-6737-4296-a35b-39bbb4a6f21b" (UID: "b7312e42-6737-4296-a35b-39bbb4a6f21b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.728422 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-config" (OuterVolumeSpecName: "config") pod "b7312e42-6737-4296-a35b-39bbb4a6f21b" (UID: "b7312e42-6737-4296-a35b-39bbb4a6f21b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.751056 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.751088 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6stw\" (UniqueName: \"kubernetes.io/projected/b7312e42-6737-4296-a35b-39bbb4a6f21b-kube-api-access-x6stw\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.751100 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.788852 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7312e42-6737-4296-a35b-39bbb4a6f21b" (UID: "b7312e42-6737-4296-a35b-39bbb4a6f21b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.855032 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.915818 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 16:52:47 crc kubenswrapper[4758]: E0122 16:52:47.916277 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerName="neutron-api" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.916290 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerName="neutron-api" Jan 22 16:52:47 crc kubenswrapper[4758]: E0122 16:52:47.916299 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerName="cinder-api-log" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.916305 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerName="cinder-api-log" Jan 22 16:52:47 crc kubenswrapper[4758]: E0122 16:52:47.916333 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerName="cinder-api" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.916340 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerName="cinder-api" Jan 22 16:52:47 crc kubenswrapper[4758]: E0122 16:52:47.916353 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerName="neutron-httpd" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.916359 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerName="neutron-httpd" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.916510 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerName="cinder-api-log" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.916522 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerName="neutron-httpd" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.916535 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7312e42-6737-4296-a35b-39bbb4a6f21b" containerName="neutron-api" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.916556 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5401a8-4432-405a-8cdd-06d21ee90ece" containerName="cinder-api" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.917217 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.918432 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b7312e42-6737-4296-a35b-39bbb4a6f21b" (UID: "b7312e42-6737-4296-a35b-39bbb4a6f21b"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.926619 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-kmlnc" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.927605 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.935337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.936182 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.982312 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05be9d3-0051-48ce-9100-e436b5f14762-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.982693 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f05be9d3-0051-48ce-9100-e436b5f14762-openstack-config\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.982965 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f05be9d3-0051-48ce-9100-e436b5f14762-openstack-config-secret\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.983036 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6qkl\" (UniqueName: \"kubernetes.io/projected/f05be9d3-0051-48ce-9100-e436b5f14762-kube-api-access-p6qkl\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:47 crc kubenswrapper[4758]: I0122 16:52:47.983345 4758 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7312e42-6737-4296-a35b-39bbb4a6f21b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.088830 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f05be9d3-0051-48ce-9100-e436b5f14762-openstack-config-secret\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.088903 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6qkl\" (UniqueName: \"kubernetes.io/projected/f05be9d3-0051-48ce-9100-e436b5f14762-kube-api-access-p6qkl\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.089001 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f05be9d3-0051-48ce-9100-e436b5f14762-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.089034 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f05be9d3-0051-48ce-9100-e436b5f14762-openstack-config\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.090005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f05be9d3-0051-48ce-9100-e436b5f14762-openstack-config\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.097722 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05be9d3-0051-48ce-9100-e436b5f14762-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.099281 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f05be9d3-0051-48ce-9100-e436b5f14762-openstack-config-secret\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.118325 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6qkl\" (UniqueName: \"kubernetes.io/projected/f05be9d3-0051-48ce-9100-e436b5f14762-kube-api-access-p6qkl\") pod \"openstackclient\" (UID: \"f05be9d3-0051-48ce-9100-e436b5f14762\") " pod="openstack/openstackclient" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.191356 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.381174 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.465636 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.525383 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efc0b77e-57a1-4a76-93ae-c56db1fd3969","Type":"ContainerStarted","Data":"4d01365d79aedf70c1dc862fa5fa99a13eedb36d4444e68fb8077a2ef5a093dd"} Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.539307 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f8f6c6576-zfqs4" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.545331 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.545378 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f8f6c6576-zfqs4" event={"ID":"b7312e42-6737-4296-a35b-39bbb4a6f21b","Type":"ContainerDied","Data":"cd241df4d9a9ca5fb55df0f9463dfe3812ee19ccbc679251cacb91b57217b4ea"} Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.545426 4758 scope.go:117] "RemoveContainer" containerID="c0ef1600c909cea06f743be6661231c80d0f2cf31472785a373ddde21f6e6f4b" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.611043 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f8f6c6576-zfqs4"] Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.643060 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f8f6c6576-zfqs4"] Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.670540 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.684068 4758 scope.go:117] "RemoveContainer" containerID="0c91657a572b3b34b8817f7c25202435a5ff9b50a99f94fed486d107c72a8bd0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.685659 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.695467 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.705260 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.711528 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.723046 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.723258 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.731045 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.740932 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6c78f7b546-sv5rx" podUID="177272b6-b55b-4e45-9336-d6227af172d0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.176:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.829317 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-config-data\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.829363 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-scripts\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.829387 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3943daea-3dfe-4c65-ada3-f1c36f9701f8-logs\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.829437 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.829471 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtjqw\" (UniqueName: \"kubernetes.io/projected/3943daea-3dfe-4c65-ada3-f1c36f9701f8-kube-api-access-rtjqw\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.829499 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.829515 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-config-data-custom\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.829543 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.829591 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3943daea-3dfe-4c65-ada3-f1c36f9701f8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.900728 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5401a8-4432-405a-8cdd-06d21ee90ece" path="/var/lib/kubelet/pods/0a5401a8-4432-405a-8cdd-06d21ee90ece/volumes" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.901732 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7312e42-6737-4296-a35b-39bbb4a6f21b" path="/var/lib/kubelet/pods/b7312e42-6737-4296-a35b-39bbb4a6f21b/volumes" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.903298 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.955305 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-config-data\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " 
pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.955355 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-scripts\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.955409 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3943daea-3dfe-4c65-ada3-f1c36f9701f8-logs\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.955845 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.955902 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtjqw\" (UniqueName: \"kubernetes.io/projected/3943daea-3dfe-4c65-ada3-f1c36f9701f8-kube-api-access-rtjqw\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.955968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.956180 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-config-data-custom\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.956224 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.956511 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3943daea-3dfe-4c65-ada3-f1c36f9701f8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.956624 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3943daea-3dfe-4c65-ada3-f1c36f9701f8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.961086 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3943daea-3dfe-4c65-ada3-f1c36f9701f8-logs\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " 
pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.966721 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-config-data-custom\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.968491 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.969104 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-config-data\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.972731 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:48 crc kubenswrapper[4758]: I0122 16:52:48.984454 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:49 crc kubenswrapper[4758]: I0122 16:52:49.019714 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3943daea-3dfe-4c65-ada3-f1c36f9701f8-scripts\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:49 crc kubenswrapper[4758]: I0122 16:52:49.033393 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtjqw\" (UniqueName: \"kubernetes.io/projected/3943daea-3dfe-4c65-ada3-f1c36f9701f8-kube-api-access-rtjqw\") pod \"cinder-api-0\" (UID: \"3943daea-3dfe-4c65-ada3-f1c36f9701f8\") " pod="openstack/cinder-api-0" Jan 22 16:52:49 crc kubenswrapper[4758]: I0122 16:52:49.062521 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 16:52:49 crc kubenswrapper[4758]: I0122 16:52:49.071815 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 16:52:49 crc kubenswrapper[4758]: I0122 16:52:49.571857 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f05be9d3-0051-48ce-9100-e436b5f14762","Type":"ContainerStarted","Data":"7a6d580e6a6d97a4e099aa119eafb79f2902908e7f45ae05b8111f9d3793dec3"} Jan 22 16:52:49 crc kubenswrapper[4758]: I0122 16:52:49.587869 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5","Type":"ContainerStarted","Data":"ae7b623d7c963e4d8fa161d307c65cdfa118cab21c97a1e8714073ef0305a67a"} Jan 22 16:52:49 crc kubenswrapper[4758]: I0122 16:52:49.829590 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.021828 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.025672 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.209505 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c78f7b546-sv5rx" Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.327218 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b7cfcc9b6-tclz9"] Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.342622 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.393061 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.633692 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efc0b77e-57a1-4a76-93ae-c56db1fd3969","Type":"ContainerStarted","Data":"6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb"} Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.639479 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5","Type":"ContainerStarted","Data":"1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1"} Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.640810 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api-log" containerID="cri-o://91aa93d9de3692d224a3e448cb4b5983ea39fcb4ed0a1602ea985f042784b45c" gracePeriod=30 Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.640895 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3943daea-3dfe-4c65-ada3-f1c36f9701f8","Type":"ContainerStarted","Data":"2273b38dfeec5ec931efee806c107e5d4562653e261d30b61960a175f315516e"} Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.641961 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" 
containerName="cinder-scheduler" containerID="cri-o://fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf" gracePeriod=30 Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.642394 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api" containerID="cri-o://3b2e7ee039de7f55d394f9218bcd174b16bf80f81b4fed8aba2bb2eff102017e" gracePeriod=30 Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.642648 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" containerName="probe" containerID="cri-o://1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19" gracePeriod=30 Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.665646 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": EOF" Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.666269 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": EOF" Jan 22 16:52:50 crc kubenswrapper[4758]: I0122 16:52:50.963929 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.105556 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647dd9b96f-tvdcp"] Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.116135 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" podUID="13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" containerName="dnsmasq-dns" containerID="cri-o://07d58592b5fe3309684fc29c740b9416c6aab32053853beeb26cdde70d5380e2" gracePeriod=10 Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.682143 4758 generic.go:334] "Generic (PLEG): container finished" podID="13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" containerID="07d58592b5fe3309684fc29c740b9416c6aab32053853beeb26cdde70d5380e2" exitCode=0 Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.682257 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" event={"ID":"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4","Type":"ContainerDied","Data":"07d58592b5fe3309684fc29c740b9416c6aab32053853beeb26cdde70d5380e2"} Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.696206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efc0b77e-57a1-4a76-93ae-c56db1fd3969","Type":"ContainerStarted","Data":"054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27"} Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.717512 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5","Type":"ContainerStarted","Data":"80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b"} Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.738591 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerID="91aa93d9de3692d224a3e448cb4b5983ea39fcb4ed0a1602ea985f042784b45c" exitCode=143 Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.738650 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" event={"ID":"cd5b4616-f0db-4639-a791-c8882e65f6ca","Type":"ContainerDied","Data":"91aa93d9de3692d224a3e448cb4b5983ea39fcb4ed0a1602ea985f042784b45c"} Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.740963 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.7409492239999995 podStartE2EDuration="5.740949224s" podCreationTimestamp="2026-01-22 16:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:51.723576958 +0000 UTC m=+1393.206916263" watchObservedRunningTime="2026-01-22 16:52:51.740949224 +0000 UTC m=+1393.224288509" Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.881116 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:51 crc kubenswrapper[4758]: I0122 16:52:51.904988 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.904973577 podStartE2EDuration="5.904973577s" podCreationTimestamp="2026-01-22 16:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:51.754169045 +0000 UTC m=+1393.237508340" watchObservedRunningTime="2026-01-22 16:52:51.904973577 +0000 UTC m=+1393.388312862" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.000431 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-nb\") pod \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.000489 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-sb\") pod \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.000527 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l295l\" (UniqueName: \"kubernetes.io/projected/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-kube-api-access-l295l\") pod \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.000662 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-config\") pod \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.000756 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-svc\") pod \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " Jan 22 
16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.000788 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-swift-storage-0\") pod \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\" (UID: \"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.013304 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-kube-api-access-l295l" (OuterVolumeSpecName: "kube-api-access-l295l") pod "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" (UID: "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4"). InnerVolumeSpecName "kube-api-access-l295l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.077724 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" (UID: "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.088638 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" (UID: "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.092184 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-config" (OuterVolumeSpecName: "config") pod "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" (UID: "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.109475 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.109506 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.109518 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.109531 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l295l\" (UniqueName: \"kubernetes.io/projected/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-kube-api-access-l295l\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.116784 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" (UID: "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.117246 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" (UID: "13009e8c-ff8c-4429-ba2d-3a0053fe0ff4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.211289 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.211313 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.578792 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.726567 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgjk5\" (UniqueName: \"kubernetes.io/projected/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-kube-api-access-vgjk5\") pod \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.726672 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-scripts\") pod \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.726804 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-etc-machine-id\") pod \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.726888 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data-custom\") pod \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.726924 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data\") pod \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.726977 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-combined-ca-bundle\") pod \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\" (UID: \"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d\") " Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.728307 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" (UID: "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.741604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-kube-api-access-vgjk5" (OuterVolumeSpecName: "kube-api-access-vgjk5") pod "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" (UID: "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d"). InnerVolumeSpecName "kube-api-access-vgjk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.741679 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-scripts" (OuterVolumeSpecName: "scripts") pod "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" (UID: "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.741756 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" (UID: "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.774196 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3943daea-3dfe-4c65-ada3-f1c36f9701f8","Type":"ContainerStarted","Data":"1e6060e774ebe03410562d587d70c3da481c9c3d90ccdeff03a08b95adf18212"} Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.782665 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" event={"ID":"13009e8c-ff8c-4429-ba2d-3a0053fe0ff4","Type":"ContainerDied","Data":"86345802f5a5a15aa9bf1c1de706ae793adceca42022e6cc8d86be975077107e"} Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.782718 4758 scope.go:117] "RemoveContainer" containerID="07d58592b5fe3309684fc29c740b9416c6aab32053853beeb26cdde70d5380e2" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.782868 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647dd9b96f-tvdcp" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.796901 4758 generic.go:334] "Generic (PLEG): container finished" podID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" containerID="1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19" exitCode=0 Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.796936 4758 generic.go:334] "Generic (PLEG): container finished" podID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" containerID="fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf" exitCode=0 Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.798072 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d","Type":"ContainerDied","Data":"1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19"} Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.798101 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d","Type":"ContainerDied","Data":"fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf"} Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.798112 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d","Type":"ContainerDied","Data":"df1d6d2dd7d9c2c797adc55ed22ba1381da8fcf9a069fa1eccc001007b7ed94b"} Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.798115 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.830600 4758 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.830628 4758 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.830638 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgjk5\" (UniqueName: \"kubernetes.io/projected/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-kube-api-access-vgjk5\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.830651 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.843397 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" (UID: "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.899885 4758 scope.go:117] "RemoveContainer" containerID="b35efac72031395c7a23d19e43fdf246a2ace507230c2adefcc1424a200fa16a" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.900321 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647dd9b96f-tvdcp"] Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.908321 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647dd9b96f-tvdcp"] Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.930104 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data" (OuterVolumeSpecName: "config-data") pod "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" (UID: "05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.935411 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.935438 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.952547 4758 scope.go:117] "RemoveContainer" containerID="1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19" Jan 22 16:52:52 crc kubenswrapper[4758]: I0122 16:52:52.978487 4758 scope.go:117] "RemoveContainer" containerID="fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.002854 4758 scope.go:117] "RemoveContainer" containerID="1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19" Jan 22 16:52:53 crc kubenswrapper[4758]: E0122 16:52:53.003402 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19\": container with ID starting with 1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19 not found: ID does not exist" containerID="1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.003437 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19"} err="failed to get container status \"1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19\": rpc error: code = NotFound desc = could not find container \"1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19\": container with ID starting with 1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19 not found: ID does not exist" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.003479 4758 scope.go:117] "RemoveContainer" containerID="fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf" Jan 22 16:52:53 crc kubenswrapper[4758]: E0122 16:52:53.003932 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf\": 
container with ID starting with fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf not found: ID does not exist" containerID="fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.003979 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf"} err="failed to get container status \"fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf\": rpc error: code = NotFound desc = could not find container \"fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf\": container with ID starting with fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf not found: ID does not exist" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.003994 4758 scope.go:117] "RemoveContainer" containerID="1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.004354 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19"} err="failed to get container status \"1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19\": rpc error: code = NotFound desc = could not find container \"1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19\": container with ID starting with 1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19 not found: ID does not exist" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.004373 4758 scope.go:117] "RemoveContainer" containerID="fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.005432 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf"} err="failed to get container status \"fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf\": rpc error: code = NotFound desc = could not find container \"fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf\": container with ID starting with fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf not found: ID does not exist" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.163538 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.198825 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.222322 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 16:52:53 crc kubenswrapper[4758]: E0122 16:52:53.222797 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" containerName="init" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.222817 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" containerName="init" Jan 22 16:52:53 crc kubenswrapper[4758]: E0122 16:52:53.222839 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" containerName="probe" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.222845 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" containerName="probe" 
Jan 22 16:52:53 crc kubenswrapper[4758]: E0122 16:52:53.222873 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" containerName="cinder-scheduler" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.222878 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" containerName="cinder-scheduler" Jan 22 16:52:53 crc kubenswrapper[4758]: E0122 16:52:53.222890 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" containerName="dnsmasq-dns" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.222896 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" containerName="dnsmasq-dns" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.223071 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" containerName="cinder-scheduler" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.223092 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" containerName="dnsmasq-dns" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.223104 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" containerName="probe" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.224167 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.227841 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.250961 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.350755 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.350835 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-scripts\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.350919 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4898b260-d20c-4e08-a760-5fa80e700b95-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.350936 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.350957 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-config-data\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.350992 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8hv7\" (UniqueName: \"kubernetes.io/projected/4898b260-d20c-4e08-a760-5fa80e700b95-kube-api-access-j8hv7\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.452707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-scripts\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.452847 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4898b260-d20c-4e08-a760-5fa80e700b95-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.452870 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.452890 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-config-data\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.452927 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8hv7\" (UniqueName: \"kubernetes.io/projected/4898b260-d20c-4e08-a760-5fa80e700b95-kube-api-access-j8hv7\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.452970 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.454907 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4898b260-d20c-4e08-a760-5fa80e700b95-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.457005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.457424 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.457475 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.462837 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4898b260-d20c-4e08-a760-5fa80e700b95-config-data\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.476644 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8hv7\" (UniqueName: \"kubernetes.io/projected/4898b260-d20c-4e08-a760-5fa80e700b95-kube-api-access-j8hv7\") pod \"cinder-scheduler-0\" (UID: \"4898b260-d20c-4e08-a760-5fa80e700b95\") " pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.541235 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.859982 4758 generic.go:334] "Generic (PLEG): container finished" podID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerID="82eb701a31f22b2189008d04498f33dda0d615831b0b09fbc67e94bf80067085" exitCode=1 Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.860076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5","Type":"ContainerDied","Data":"82eb701a31f22b2189008d04498f33dda0d615831b0b09fbc67e94bf80067085"} Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.860716 4758 scope.go:117] "RemoveContainer" containerID="82eb701a31f22b2189008d04498f33dda0d615831b0b09fbc67e94bf80067085" Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.899074 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3943daea-3dfe-4c65-ada3-f1c36f9701f8","Type":"ContainerStarted","Data":"4b73a81653fdaac8cf19130ac975926527a47758b7c361911fad4b1019f2fdb1"} Jan 22 16:52:53 crc kubenswrapper[4758]: I0122 16:52:53.899803 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.227640 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.227620486 podStartE2EDuration="6.227620486s" podCreationTimestamp="2026-01-22 16:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:53.948629909 +0000 UTC m=+1395.431969194" watchObservedRunningTime="2026-01-22 16:52:54.227620486 +0000 UTC m=+1395.710959771" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.231602 4758 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.549314 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:45124->10.217.0.175:9311: read: connection reset by peer" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.549771 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:45140->10.217.0.175:9311: read: connection reset by peer" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.549787 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:45134->10.217.0.175:9311: read: connection reset by peer" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.549835 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:45126->10.217.0.175:9311: read: connection reset by peer" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.726911 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.726975 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.820306 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d" path="/var/lib/kubelet/pods/05f4cb00-a2ca-444c-8ca6-c85d56d9ed9d/volumes" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.821354 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13009e8c-ff8c-4429-ba2d-3a0053fe0ff4" path="/var/lib/kubelet/pods/13009e8c-ff8c-4429-ba2d-3a0053fe0ff4/volumes" Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.924972 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5","Type":"ContainerStarted","Data":"7881cf6a1ea9246b1451350e25b945ccd52405bd209ed861bedc85b51ac01118"} Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.927968 4758 generic.go:334] "Generic (PLEG): container finished" podID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerID="3b2e7ee039de7f55d394f9218bcd174b16bf80f81b4fed8aba2bb2eff102017e" exitCode=0 Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.928026 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" event={"ID":"cd5b4616-f0db-4639-a791-c8882e65f6ca","Type":"ContainerDied","Data":"3b2e7ee039de7f55d394f9218bcd174b16bf80f81b4fed8aba2bb2eff102017e"} Jan 22 16:52:54 crc kubenswrapper[4758]: I0122 16:52:54.933071 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"4898b260-d20c-4e08-a760-5fa80e700b95","Type":"ContainerStarted","Data":"878d07ecf5271b42e0a42dfd438728296756d596cf9aea73f92441a894b33fef"} Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.004233 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.099329 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data-custom\") pod \"cd5b4616-f0db-4639-a791-c8882e65f6ca\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.099674 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data\") pod \"cd5b4616-f0db-4639-a791-c8882e65f6ca\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.099763 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd5b4616-f0db-4639-a791-c8882e65f6ca-logs\") pod \"cd5b4616-f0db-4639-a791-c8882e65f6ca\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.099871 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9mdj\" (UniqueName: \"kubernetes.io/projected/cd5b4616-f0db-4639-a791-c8882e65f6ca-kube-api-access-r9mdj\") pod \"cd5b4616-f0db-4639-a791-c8882e65f6ca\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.099901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-combined-ca-bundle\") pod \"cd5b4616-f0db-4639-a791-c8882e65f6ca\" (UID: \"cd5b4616-f0db-4639-a791-c8882e65f6ca\") " Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.100415 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd5b4616-f0db-4639-a791-c8882e65f6ca-logs" (OuterVolumeSpecName: "logs") pod "cd5b4616-f0db-4639-a791-c8882e65f6ca" (UID: "cd5b4616-f0db-4639-a791-c8882e65f6ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.100708 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd5b4616-f0db-4639-a791-c8882e65f6ca-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.108361 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd5b4616-f0db-4639-a791-c8882e65f6ca" (UID: "cd5b4616-f0db-4639-a791-c8882e65f6ca"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.113972 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5b4616-f0db-4639-a791-c8882e65f6ca-kube-api-access-r9mdj" (OuterVolumeSpecName: "kube-api-access-r9mdj") pod "cd5b4616-f0db-4639-a791-c8882e65f6ca" (UID: "cd5b4616-f0db-4639-a791-c8882e65f6ca"). InnerVolumeSpecName "kube-api-access-r9mdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.142950 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd5b4616-f0db-4639-a791-c8882e65f6ca" (UID: "cd5b4616-f0db-4639-a791-c8882e65f6ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.165398 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data" (OuterVolumeSpecName: "config-data") pod "cd5b4616-f0db-4639-a791-c8882e65f6ca" (UID: "cd5b4616-f0db-4639-a791-c8882e65f6ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.202186 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9mdj\" (UniqueName: \"kubernetes.io/projected/cd5b4616-f0db-4639-a791-c8882e65f6ca-kube-api-access-r9mdj\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.202230 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.202243 4758 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.202255 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5b4616-f0db-4639-a791-c8882e65f6ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.946910 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4898b260-d20c-4e08-a760-5fa80e700b95","Type":"ContainerStarted","Data":"a388beb76160bb8350d93fb0b7cdc1b22cc8827dd43a57e13dfed95a69f6896d"} Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.947207 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4898b260-d20c-4e08-a760-5fa80e700b95","Type":"ContainerStarted","Data":"24bf2b81f37a2e754a1d1b28d6ed78013ae2381348778ca7b2d0af1cb0b04e42"} Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.949928 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" event={"ID":"cd5b4616-f0db-4639-a791-c8882e65f6ca","Type":"ContainerDied","Data":"cf8801fa574f38bac6462d4899dfd5f6f8aafde8709b294654b251dbb92a8825"} Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.949956 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b7cfcc9b6-tclz9" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.950025 4758 scope.go:117] "RemoveContainer" containerID="3b2e7ee039de7f55d394f9218bcd174b16bf80f81b4fed8aba2bb2eff102017e" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.979283 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.979258716 podStartE2EDuration="2.979258716s" podCreationTimestamp="2026-01-22 16:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:55.967622507 +0000 UTC m=+1397.450961792" watchObservedRunningTime="2026-01-22 16:52:55.979258716 +0000 UTC m=+1397.462598011" Jan 22 16:52:55 crc kubenswrapper[4758]: I0122 16:52:55.983628 4758 scope.go:117] "RemoveContainer" containerID="91aa93d9de3692d224a3e448cb4b5983ea39fcb4ed0a1602ea985f042784b45c" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.000675 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b7cfcc9b6-tclz9"] Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.007774 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b7cfcc9b6-tclz9"] Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.640727 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5fb5ff74dc-qd4wf"] Jan 22 16:52:56 crc kubenswrapper[4758]: E0122 16:52:56.641141 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api-log" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.641163 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api-log" Jan 22 16:52:56 crc kubenswrapper[4758]: E0122 16:52:56.641171 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.641177 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.641371 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.641393 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" containerName="barbican-api-log" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.647831 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.659196 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.659390 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.659511 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.670505 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5fb5ff74dc-qd4wf"] Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.731004 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8c43412a-0632-40d3-918a-e8a601754dcd-etc-swift\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.731069 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz47w\" (UniqueName: \"kubernetes.io/projected/8c43412a-0632-40d3-918a-e8a601754dcd-kube-api-access-cz47w\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.731105 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-config-data\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.731158 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-internal-tls-certs\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.731176 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c43412a-0632-40d3-918a-e8a601754dcd-run-httpd\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.731218 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-combined-ca-bundle\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.731242 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c43412a-0632-40d3-918a-e8a601754dcd-log-httpd\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " 
pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.731265 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-public-tls-certs\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.839796 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-internal-tls-certs\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.839838 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c43412a-0632-40d3-918a-e8a601754dcd-run-httpd\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.839884 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-combined-ca-bundle\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.839906 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c43412a-0632-40d3-918a-e8a601754dcd-log-httpd\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.839929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-public-tls-certs\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.839983 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8c43412a-0632-40d3-918a-e8a601754dcd-etc-swift\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.840009 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz47w\" (UniqueName: \"kubernetes.io/projected/8c43412a-0632-40d3-918a-e8a601754dcd-kube-api-access-cz47w\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.840038 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-config-data\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" 
Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.845788 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c43412a-0632-40d3-918a-e8a601754dcd-log-httpd\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.846170 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c43412a-0632-40d3-918a-e8a601754dcd-run-httpd\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.848566 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-config-data\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.866971 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8c43412a-0632-40d3-918a-e8a601754dcd-etc-swift\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.868478 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-combined-ca-bundle\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.894449 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-internal-tls-certs\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.903899 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-88b76f788-th2jq" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.907394 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c43412a-0632-40d3-918a-e8a601754dcd-public-tls-certs\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.915632 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz47w\" (UniqueName: \"kubernetes.io/projected/8c43412a-0632-40d3-918a-e8a601754dcd-kube-api-access-cz47w\") pod \"swift-proxy-5fb5ff74dc-qd4wf\" (UID: \"8c43412a-0632-40d3-918a-e8a601754dcd\") " pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.935657 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd5b4616-f0db-4639-a791-c8882e65f6ca" path="/var/lib/kubelet/pods/cd5b4616-f0db-4639-a791-c8882e65f6ca/volumes" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.936956 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:52:56 crc kubenswrapper[4758]: I0122 16:52:56.965795 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.116861 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.117229 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.168037 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.176377 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.280021 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.280062 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.332327 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.344073 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.581837 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5fb5ff74dc-qd4wf"] Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.664669 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.665445 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="ceilometer-central-agent" containerID="cri-o://a87ec731c15c865dfd922ff358e50c07ec711fad452c4bc5d2435063607b9f52" gracePeriod=30 Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.666317 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="proxy-httpd" containerID="cri-o://37127c9b7a1671450fd4af1c038c3c3c5b2f6249e9734fc5f89e5d196c959cf0" gracePeriod=30 Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.666496 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="ceilometer-notification-agent" containerID="cri-o://6042a0e469a10144b1d7b8eec0dab7ca6f4bdf95df62b73166c44465f0907f22" gracePeriod=30 Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.666570 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" 
containerName="sg-core" containerID="cri-o://0551082ac671f06b3f279766df1407ea976e33632c448297e015ccc75c12a048" gracePeriod=30 Jan 22 16:52:57 crc kubenswrapper[4758]: I0122 16:52:57.684100 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.042426 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" event={"ID":"8c43412a-0632-40d3-918a-e8a601754dcd","Type":"ContainerStarted","Data":"8c34a4cf1ea256f5f6917da2dc8d3ca5019213bf301a2c721ee5465d31d45d87"} Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.042699 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" event={"ID":"8c43412a-0632-40d3-918a-e8a601754dcd","Type":"ContainerStarted","Data":"a2d0ea4ba1cde18b4eed8d16e1cbc07215acc579018c4186fbbc09880cb92fb9"} Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.048468 4758 generic.go:334] "Generic (PLEG): container finished" podID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerID="37127c9b7a1671450fd4af1c038c3c3c5b2f6249e9734fc5f89e5d196c959cf0" exitCode=0 Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.048501 4758 generic.go:334] "Generic (PLEG): container finished" podID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerID="0551082ac671f06b3f279766df1407ea976e33632c448297e015ccc75c12a048" exitCode=2 Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.048642 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerDied","Data":"37127c9b7a1671450fd4af1c038c3c3c5b2f6249e9734fc5f89e5d196c959cf0"} Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.048691 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerDied","Data":"0551082ac671f06b3f279766df1407ea976e33632c448297e015ccc75c12a048"} Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.049336 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.049510 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.050029 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.050221 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 16:52:58 crc kubenswrapper[4758]: I0122 16:52:58.541712 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.066311 4758 generic.go:334] "Generic (PLEG): container finished" podID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerID="7881cf6a1ea9246b1451350e25b945ccd52405bd209ed861bedc85b51ac01118" exitCode=1 Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.066395 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5","Type":"ContainerDied","Data":"7881cf6a1ea9246b1451350e25b945ccd52405bd209ed861bedc85b51ac01118"} Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.066434 4758 
scope.go:117] "RemoveContainer" containerID="82eb701a31f22b2189008d04498f33dda0d615831b0b09fbc67e94bf80067085" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.067233 4758 scope.go:117] "RemoveContainer" containerID="7881cf6a1ea9246b1451350e25b945ccd52405bd209ed861bedc85b51ac01118" Jan 22 16:52:59 crc kubenswrapper[4758]: E0122 16:52:59.067500 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5)\"" pod="openstack/watcher-decision-engine-0" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.077827 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" event={"ID":"8c43412a-0632-40d3-918a-e8a601754dcd","Type":"ContainerStarted","Data":"046daa0c3a2438eca1d1efc466e62a0e161209df6cd1a905ff8652c24ea8737a"} Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.078771 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.078814 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.082888 4758 generic.go:334] "Generic (PLEG): container finished" podID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerID="6042a0e469a10144b1d7b8eec0dab7ca6f4bdf95df62b73166c44465f0907f22" exitCode=0 Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.082914 4758 generic.go:334] "Generic (PLEG): container finished" podID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerID="a87ec731c15c865dfd922ff358e50c07ec711fad452c4bc5d2435063607b9f52" exitCode=0 Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.083729 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerDied","Data":"6042a0e469a10144b1d7b8eec0dab7ca6f4bdf95df62b73166c44465f0907f22"} Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.083807 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerDied","Data":"a87ec731c15c865dfd922ff358e50c07ec711fad452c4bc5d2435063607b9f52"} Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.118235 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" podStartSLOduration=3.118212838 podStartE2EDuration="3.118212838s" podCreationTimestamp="2026-01-22 16:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:59.11427166 +0000 UTC m=+1400.597610955" watchObservedRunningTime="2026-01-22 16:52:59.118212838 +0000 UTC m=+1400.601552113" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.181785 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.303591 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-scripts\") pod \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.304077 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-config-data\") pod \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.304128 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-run-httpd\") pod \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.304251 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-sg-core-conf-yaml\") pod \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.304311 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-log-httpd\") pod \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.304341 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b79c9\" (UniqueName: \"kubernetes.io/projected/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-kube-api-access-b79c9\") pod \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.304449 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-combined-ca-bundle\") pod \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\" (UID: \"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2\") " Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.306971 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" (UID: "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.307380 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" (UID: "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.313964 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-scripts" (OuterVolumeSpecName: "scripts") pod "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" (UID: "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.325317 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-kube-api-access-b79c9" (OuterVolumeSpecName: "kube-api-access-b79c9") pod "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" (UID: "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2"). InnerVolumeSpecName "kube-api-access-b79c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.377840 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" (UID: "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.406808 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.406847 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.406861 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.406872 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b79c9\" (UniqueName: \"kubernetes.io/projected/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-kube-api-access-b79c9\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.406882 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.446991 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" (UID: "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.469868 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-config-data" (OuterVolumeSpecName: "config-data") pod "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" (UID: "e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.508315 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:59 crc kubenswrapper[4758]: I0122 16:52:59.508361 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.094031 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2","Type":"ContainerDied","Data":"f5e4235e98107cb1e473405543af580e1db21446d3e7c65c4138bcb0b2364577"} Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.094080 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.094385 4758 scope.go:117] "RemoveContainer" containerID="37127c9b7a1671450fd4af1c038c3c3c5b2f6249e9734fc5f89e5d196c959cf0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.109975 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.109997 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.132415 4758 scope.go:117] "RemoveContainer" containerID="0551082ac671f06b3f279766df1407ea976e33632c448297e015ccc75c12a048" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.135867 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.158699 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.164909 4758 scope.go:117] "RemoveContainer" containerID="6042a0e469a10144b1d7b8eec0dab7ca6f4bdf95df62b73166c44465f0907f22" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.192819 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:00 crc kubenswrapper[4758]: E0122 16:53:00.193281 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="ceilometer-notification-agent" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.193294 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="ceilometer-notification-agent" Jan 22 16:53:00 crc kubenswrapper[4758]: E0122 16:53:00.193344 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="proxy-httpd" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.193351 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="proxy-httpd" Jan 22 16:53:00 crc kubenswrapper[4758]: E0122 16:53:00.193374 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="sg-core" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.193379 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="sg-core" Jan 22 16:53:00 crc kubenswrapper[4758]: 
E0122 16:53:00.193391 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="ceilometer-central-agent" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.193398 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="ceilometer-central-agent" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.193583 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="ceilometer-central-agent" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.193599 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="sg-core" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.193622 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="proxy-httpd" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.193636 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" containerName="ceilometer-notification-agent" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.195414 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.198934 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.199171 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.201679 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.208256 4758 scope.go:117] "RemoveContainer" containerID="a87ec731c15c865dfd922ff358e50c07ec711fad452c4bc5d2435063607b9f52" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.225906 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-scripts\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.226039 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-log-httpd\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.226133 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmt4\" (UniqueName: \"kubernetes.io/projected/09104569-02cb-462b-84af-6e8d4f7e6a7d-kube-api-access-jvmt4\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.226166 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-config-data\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 
16:53:00.226192 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-run-httpd\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.226222 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.226461 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.327496 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.327551 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-scripts\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.327594 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-log-httpd\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.327624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvmt4\" (UniqueName: \"kubernetes.io/projected/09104569-02cb-462b-84af-6e8d4f7e6a7d-kube-api-access-jvmt4\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.327643 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-config-data\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.327658 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-run-httpd\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.327677 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" 
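The surrounding entries record ceilometer-0 being torn down and recreated: SyncLoop DELETE/REMOVE for the old pod UID, RemoveStaleState clearing CPU/memory manager state for its containers, then SyncLoop ADD and "No sandbox for pod can be found. Need to start a new one" for the new UID, followed by the same volume reconcile sequence as above. A small watch sketch (illustrative only, same kubeconfig assumption as the previous example) shows the API-side view of that delete/recreate cycle; each ADDED/DELETED event it prints pairs with a SyncLoop ADD/DELETE entry in this journal:

// watch_ceilometer.go: follow the delete/recreate cycle of ceilometer-0
// that the SyncLoop DELETE/REMOVE/ADD entries above record on the kubelet side.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch only the pod named in the log; the UID changes across the
	// delete/recreate boundary (e80f8eab-... old, 09104569-... new).
	w, err := clientset.CoreV1().Pods("openstack").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ceilometer-0",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Printf("%s  uid=%s  phase=%s\n", ev.Type, pod.UID, pod.Status.Phase)
	}
}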
Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.328297 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-log-httpd\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.329678 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-run-httpd\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.333722 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-config-data\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.334596 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.343653 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-scripts\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.346579 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvmt4\" (UniqueName: \"kubernetes.io/projected/09104569-02cb-462b-84af-6e8d4f7e6a7d-kube-api-access-jvmt4\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.363382 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.527058 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.776874 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.784833 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.824450 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2" path="/var/lib/kubelet/pods/e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2/volumes" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.830414 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.830504 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:53:00 crc kubenswrapper[4758]: I0122 16:53:00.955498 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 16:53:01 crc kubenswrapper[4758]: I0122 16:53:01.680491 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.675945 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice/crio-839071acf6a607e5fdccef048f7b7875c23ea52a14bf7e3d9ab757714f863069 WatchSource:0}: Error finding container 839071acf6a607e5fdccef048f7b7875c23ea52a14bf7e3d9ab757714f863069: Status 404 returned error can't find the container with id 839071acf6a607e5fdccef048f7b7875c23ea52a14bf7e3d9ab757714f863069 Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.683095 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice/crio-17019e62cb4154d285ebe6847dc6cf2dc95ee42f42987bb3c08b929e970d7b29 WatchSource:0}: Error finding container 17019e62cb4154d285ebe6847dc6cf2dc95ee42f42987bb3c08b929e970d7b29: Status 404 returned error can't find the container with id 17019e62cb4154d285ebe6847dc6cf2dc95ee42f42987bb3c08b929e970d7b29 Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.683291 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice/crio-conmon-0552816a9ba646bbab3b68c9a8675ed966b7ebf91f680a33ae8338aaf77e68a7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice/crio-conmon-0552816a9ba646bbab3b68c9a8675ed966b7ebf91f680a33ae8338aaf77e68a7.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.683343 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice/crio-conmon-957da5532f165f92f1aca059a509fea4f4d557da7b77fc0b490d72e8187a1820.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice/crio-conmon-957da5532f165f92f1aca059a509fea4f4d557da7b77fc0b490d72e8187a1820.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.695486 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice/crio-957da5532f165f92f1aca059a509fea4f4d557da7b77fc0b490d72e8187a1820.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice/crio-957da5532f165f92f1aca059a509fea4f4d557da7b77fc0b490d72e8187a1820.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.695549 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice/crio-0552816a9ba646bbab3b68c9a8675ed966b7ebf91f680a33ae8338aaf77e68a7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice/crio-0552816a9ba646bbab3b68c9a8675ed966b7ebf91f680a33ae8338aaf77e68a7.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.695716 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5401a8_4432_405a_8cdd_06d21ee90ece.slice/crio-conmon-9ed7716389bb42108c02f6a05b6d832a0b8d104d94073ae0784b73206992c4da.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5401a8_4432_405a_8cdd_06d21ee90ece.slice/crio-conmon-9ed7716389bb42108c02f6a05b6d832a0b8d104d94073ae0784b73206992c4da.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: E0122 16:53:02.698606 4758 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13009e8c_ff8c_4429_ba2d_3a0053fe0ff4.slice/crio-86345802f5a5a15aa9bf1c1de706ae793adceca42022e6cc8d86be975077107e: Error finding container 86345802f5a5a15aa9bf1c1de706ae793adceca42022e6cc8d86be975077107e: Status 404 returned error can't find the container with id 86345802f5a5a15aa9bf1c1de706ae793adceca42022e6cc8d86be975077107e Jan 22 16:53:02 crc kubenswrapper[4758]: E0122 16:53:02.703556 4758 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode140fc6a_db89_4748_be82_94765061de55.slice/crio-5430a78ab14ea97b8f9157db2c162b367099bcba084dd84c65f22ba450fefbe0: Error finding container 5430a78ab14ea97b8f9157db2c162b367099bcba084dd84c65f22ba450fefbe0: Status 404 returned error can't find the container with id 5430a78ab14ea97b8f9157db2c162b367099bcba084dd84c65f22ba450fefbe0 Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.703731 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-conmon-6042a0e469a10144b1d7b8eec0dab7ca6f4bdf95df62b73166c44465f0907f22.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-conmon-6042a0e469a10144b1d7b8eec0dab7ca6f4bdf95df62b73166c44465f0907f22.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.703772 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5401a8_4432_405a_8cdd_06d21ee90ece.slice/crio-9ed7716389bb42108c02f6a05b6d832a0b8d104d94073ae0784b73206992c4da.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5401a8_4432_405a_8cdd_06d21ee90ece.slice/crio-9ed7716389bb42108c02f6a05b6d832a0b8d104d94073ae0784b73206992c4da.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.704784 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-6042a0e469a10144b1d7b8eec0dab7ca6f4bdf95df62b73166c44465f0907f22.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-6042a0e469a10144b1d7b8eec0dab7ca6f4bdf95df62b73166c44465f0907f22.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.704810 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f4cb00_a2ca_444c_8ca6_c85d56d9ed9d.slice/crio-conmon-1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f4cb00_a2ca_444c_8ca6_c85d56d9ed9d.slice/crio-conmon-1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.704828 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f4cb00_a2ca_444c_8ca6_c85d56d9ed9d.slice/crio-1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f4cb00_a2ca_444c_8ca6_c85d56d9ed9d.slice/crio-1da3f1dc3b4e352657d6ec448c3e8750a635e4b7d4ebc56baf53a5ff63632e19.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.704843 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice/crio-conmon-bcce4d53124194930cd7b4bba64d473ac0a2114c056e048e63f5ca406fbf45fc.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice/crio-conmon-bcce4d53124194930cd7b4bba64d473ac0a2114c056e048e63f5ca406fbf45fc.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.704859 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice/crio-conmon-686e4484c8c23dd808c8c81a761d97343138ed477de30a2c8c237cecd7b034ef.scope": 0x40000100 == 
IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice/crio-conmon-686e4484c8c23dd808c8c81a761d97343138ed477de30a2c8c237cecd7b034ef.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.704873 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice/crio-bcce4d53124194930cd7b4bba64d473ac0a2114c056e048e63f5ca406fbf45fc.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice/crio-bcce4d53124194930cd7b4bba64d473ac0a2114c056e048e63f5ca406fbf45fc.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.704888 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice/crio-686e4484c8c23dd808c8c81a761d97343138ed477de30a2c8c237cecd7b034ef.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice/crio-686e4484c8c23dd808c8c81a761d97343138ed477de30a2c8c237cecd7b034ef.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.704901 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-conmon-0551082ac671f06b3f279766df1407ea976e33632c448297e015ccc75c12a048.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-conmon-0551082ac671f06b3f279766df1407ea976e33632c448297e015ccc75c12a048.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.704915 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-0551082ac671f06b3f279766df1407ea976e33632c448297e015ccc75c12a048.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-0551082ac671f06b3f279766df1407ea976e33632c448297e015ccc75c12a048.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: E0122 16:53:02.707033 4758 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: , extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2/ceilometer-central-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_e80f8eab-d5eb-4d0f-85f2-a4ff7e8fe5d2/ceilometer-central-agent/0.log: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: E0122 16:53:02.707241 4758 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: , extraDiskErr: could not stat "/var/log/pods/openstack_cinder-api-0_0a5401a8-4432-405a-8cdd-06d21ee90ece/cinder-api-log/0.log" to get inode usage: stat /var/log/pods/openstack_cinder-api-0_0a5401a8-4432-405a-8cdd-06d21ee90ece/cinder-api-log/0.log: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: E0122 
16:53:02.709329 4758 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5b4616_f0db_4639_a791_c8882e65f6ca.slice/crio-cf8801fa574f38bac6462d4899dfd5f6f8aafde8709b294654b251dbb92a8825: Error finding container cf8801fa574f38bac6462d4899dfd5f6f8aafde8709b294654b251dbb92a8825: Status 404 returned error can't find the container with id cf8801fa574f38bac6462d4899dfd5f6f8aafde8709b294654b251dbb92a8825 Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.714609 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f4cb00_a2ca_444c_8ca6_c85d56d9ed9d.slice/crio-fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf.scope WatchSource:0}: Error finding container fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf: Status 404 returned error can't find the container with id fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.719037 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-conmon-37127c9b7a1671450fd4af1c038c3c3c5b2f6249e9734fc5f89e5d196c959cf0.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-conmon-37127c9b7a1671450fd4af1c038c3c3c5b2f6249e9734fc5f89e5d196c959cf0.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: W0122 16:53:02.719076 4758 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-37127c9b7a1671450fd4af1c038c3c3c5b2f6249e9734fc5f89e5d196c959cf0.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-37127c9b7a1671450fd4af1c038c3c3c5b2f6249e9734fc5f89e5d196c959cf0.scope: no such file or directory Jan 22 16:53:02 crc kubenswrapper[4758]: E0122 16:53:02.928669 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-a87ec731c15c865dfd922ff358e50c07ec711fad452c4bc5d2435063607b9f52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5401a8_4432_405a_8cdd_06d21ee90ece.slice/crio-conmon-67f04c8edd0bfadf7999eb3e60499af7612f6aba062524c649cf701fd1c49e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1d0803_658d_4bdb_8770_3d3921554591.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37953c7_685d_4a7e_85fd_a2964e025825.slice/crio-1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5b4616_f0db_4639_a791_c8882e65f6ca.slice/crio-conmon-3b2e7ee039de7f55d394f9218bcd174b16bf80f81b4fed8aba2bb2eff102017e.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7312e42_6737_4296_a35b_39bbb4a6f21b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5b4616_f0db_4639_a791_c8882e65f6ca.slice/crio-conmon-91aa93d9de3692d224a3e448cb4b5983ea39fcb4ed0a1602ea985f042784b45c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-conmon-a87ec731c15c865dfd922ff358e50c07ec711fad452c4bc5d2435063607b9f52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5401a8_4432_405a_8cdd_06d21ee90ece.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5b4616_f0db_4639_a791_c8882e65f6ca.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f4cb00_a2ca_444c_8ca6_c85d56d9ed9d.slice/crio-conmon-fa2cdd68a8771f35842e6ee8c3b649e849d38378cc8174191e94cd5c7727eddf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea53227e_7c78_42b4_959c_dd2531914be2.slice/crio-conmon-a242bb86d02a02912959476d1e89c5801e3e8b0a179d33e8ede7e504d5a32eae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f4cb00_a2ca_444c_8ca6_c85d56d9ed9d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13009e8c_ff8c_4429_ba2d_3a0053fe0ff4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37953c7_685d_4a7e_85fd_a2964e025825.slice/crio-conmon-1b6cc5ccbbfc7b0277304522d450bf801fa83ae1548aa7317a2ef97828b8b019.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5401a8_4432_405a_8cdd_06d21ee90ece.slice/crio-67f04c8edd0bfadf7999eb3e60499af7612f6aba062524c649cf701fd1c49e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7312e42_6737_4296_a35b_39bbb4a6f21b.slice/crio-0c91657a572b3b34b8817f7c25202435a5ff9b50a99f94fed486d107c72a8bd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37953c7_685d_4a7e_85fd_a2964e025825.slice/crio-21830ff0562d03fac8b6c3dcf351712b2fa2309112b08dc1b3eb9338d5071507\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea53227e_7c78_42b4_959c_dd2531914be2.slice/crio-c77ced53f64d07ef3a38ca638ea8cd3142878c1beb3143a78ba43a71d899d5f1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7312e42_6737_4296_a35b_39bbb4a6f21b.slice/crio-c0ef1600c909cea06f743be6661231c80d0f2cf31472785a373ddde21f6e6f4b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea53227e_7c78_42b4_959c_dd2531914be2.slice\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5b4616_f0db_4639_a791_c8882e65f6ca.slice/crio-3b2e7ee039de7f55d394f9218bcd174b16bf80f81b4fed8aba2bb2eff102017e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37953c7_685d_4a7e_85fd_a2964e025825.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode140fc6a_db89_4748_be82_94765061de55.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1383243_b82d_4aaa_876f_aad36c14158a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7312e42_6737_4296_a35b_39bbb4a6f21b.slice/crio-cd241df4d9a9ca5fb55df0f9463dfe3812ee19ccbc679251cacb91b57217b4ea\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7312e42_6737_4296_a35b_39bbb4a6f21b.slice/crio-conmon-c0ef1600c909cea06f743be6661231c80d0f2cf31472785a373ddde21f6e6f4b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40487aaa_4c45_41b2_ab14_76477ed2f4bb.slice/crio-conmon-f512c542a3f7080a3e0e9498fe8473553577ff1a142250d2654113eab457a261.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13009e8c_ff8c_4429_ba2d_3a0053fe0ff4.slice/crio-07d58592b5fe3309684fc29c740b9416c6aab32053853beeb26cdde70d5380e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80f8eab_d5eb_4d0f_85f2_a4ff7e8fe5d2.slice/crio-f5e4235e98107cb1e473405543af580e1db21446d3e7c65c4138bcb0b2364577\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7312e42_6737_4296_a35b_39bbb4a6f21b.slice/crio-conmon-0c91657a572b3b34b8817f7c25202435a5ff9b50a99f94fed486d107c72a8bd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5401a8_4432_405a_8cdd_06d21ee90ece.slice/crio-5a482cb62e7fa3afac72f4431e885723af569f5bddbc1b69c24b2c83d129822b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40487aaa_4c45_41b2_ab14_76477ed2f4bb.slice/crio-f512c542a3f7080a3e0e9498fe8473553577ff1a142250d2654113eab457a261.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f4cb00_a2ca_444c_8ca6_c85d56d9ed9d.slice/crio-df1d6d2dd7d9c2c797adc55ed22ba1381da8fcf9a069fa1eccc001007b7ed94b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5b4616_f0db_4639_a791_c8882e65f6ca.slice/crio-91aa93d9de3692d224a3e448cb4b5983ea39fcb4ed0a1602ea985f042784b45c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13009e8c_ff8c_4429_ba2d_3a0053fe0ff4.slice/crio-conmon-07d58592b5fe3309684fc29c740b9416c6aab32053853beeb26cdde70d5380e2.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea53227e_7c78_42b4_959c_dd2531914be2.slice/crio-a242bb86d02a02912959476d1e89c5801e3e8b0a179d33e8ede7e504d5a32eae.scope\": RecentStats: unable to find data in memory cache]" Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.163303 4758 generic.go:334] "Generic (PLEG): container finished" podID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerID="f512c542a3f7080a3e0e9498fe8473553577ff1a142250d2654113eab457a261" exitCode=137 Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.163370 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b76f788-th2jq" event={"ID":"40487aaa-4c45-41b2-ab14-76477ed2f4bb","Type":"ContainerDied","Data":"f512c542a3f7080a3e0e9498fe8473553577ff1a142250d2654113eab457a261"} Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.379467 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.770862 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-lgj69"] Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.776297 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.791367 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lgj69"] Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.842359 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dpgv7"] Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.847397 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.858356 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dpgv7"] Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.868587 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-bc4e-account-create-update-mmj4j"] Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.870313 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.873836 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.880455 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-bc4e-account-create-update-mmj4j"] Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.885258 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.915076 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsg2\" (UniqueName: \"kubernetes.io/projected/581d442b-f2db-42dc-bec7-f3b0d32456fb-kube-api-access-6gsg2\") pod \"nova-api-db-create-lgj69\" (UID: \"581d442b-f2db-42dc-bec7-f3b0d32456fb\") " pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:03 crc kubenswrapper[4758]: I0122 16:53:03.915189 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/581d442b-f2db-42dc-bec7-f3b0d32456fb-operator-scripts\") pod \"nova-api-db-create-lgj69\" (UID: \"581d442b-f2db-42dc-bec7-f3b0d32456fb\") " pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.017149 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gsg2\" (UniqueName: \"kubernetes.io/projected/581d442b-f2db-42dc-bec7-f3b0d32456fb-kube-api-access-6gsg2\") pod \"nova-api-db-create-lgj69\" (UID: \"581d442b-f2db-42dc-bec7-f3b0d32456fb\") " pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.017234 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zft7v\" (UniqueName: \"kubernetes.io/projected/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-kube-api-access-zft7v\") pod \"nova-api-bc4e-account-create-update-mmj4j\" (UID: \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\") " pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.017455 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-operator-scripts\") pod \"nova-api-bc4e-account-create-update-mmj4j\" (UID: \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\") " pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.017525 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/581d442b-f2db-42dc-bec7-f3b0d32456fb-operator-scripts\") pod \"nova-api-db-create-lgj69\" (UID: \"581d442b-f2db-42dc-bec7-f3b0d32456fb\") " pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.017568 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b045db5d-f4ac-430d-a697-aeb1a8353fa3-operator-scripts\") pod \"nova-cell0-db-create-dpgv7\" (UID: \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\") " pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.017768 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9zzf\" (UniqueName: \"kubernetes.io/projected/b045db5d-f4ac-430d-a697-aeb1a8353fa3-kube-api-access-d9zzf\") pod \"nova-cell0-db-create-dpgv7\" (UID: \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\") " pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.018368 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/581d442b-f2db-42dc-bec7-f3b0d32456fb-operator-scripts\") pod \"nova-api-db-create-lgj69\" (UID: \"581d442b-f2db-42dc-bec7-f3b0d32456fb\") " pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.053546 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gsg2\" (UniqueName: \"kubernetes.io/projected/581d442b-f2db-42dc-bec7-f3b0d32456fb-kube-api-access-6gsg2\") pod \"nova-api-db-create-lgj69\" (UID: \"581d442b-f2db-42dc-bec7-f3b0d32456fb\") " pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.058890 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zx7m7"] Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.060476 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.068439 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-489c-account-create-update-262gf"] Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.073445 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.080410 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zx7m7"] Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.081626 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.107859 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-489c-account-create-update-262gf"] Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.108370 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.119074 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zft7v\" (UniqueName: \"kubernetes.io/projected/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-kube-api-access-zft7v\") pod \"nova-api-bc4e-account-create-update-mmj4j\" (UID: \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\") " pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.119201 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-operator-scripts\") pod \"nova-api-bc4e-account-create-update-mmj4j\" (UID: \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\") " pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.119269 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b045db5d-f4ac-430d-a697-aeb1a8353fa3-operator-scripts\") pod \"nova-cell0-db-create-dpgv7\" (UID: \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\") " pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.119314 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9zzf\" (UniqueName: \"kubernetes.io/projected/b045db5d-f4ac-430d-a697-aeb1a8353fa3-kube-api-access-d9zzf\") pod \"nova-cell0-db-create-dpgv7\" (UID: \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\") " pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.120529 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-operator-scripts\") pod \"nova-api-bc4e-account-create-update-mmj4j\" (UID: \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\") " pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.120829 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b045db5d-f4ac-430d-a697-aeb1a8353fa3-operator-scripts\") pod \"nova-cell0-db-create-dpgv7\" (UID: \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\") " pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.137166 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9zzf\" (UniqueName: \"kubernetes.io/projected/b045db5d-f4ac-430d-a697-aeb1a8353fa3-kube-api-access-d9zzf\") pod \"nova-cell0-db-create-dpgv7\" (UID: \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\") " pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.147522 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zft7v\" (UniqueName: \"kubernetes.io/projected/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-kube-api-access-zft7v\") pod \"nova-api-bc4e-account-create-update-mmj4j\" (UID: \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\") " pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.170565 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.220120 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.221383 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twgpk\" (UniqueName: \"kubernetes.io/projected/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-kube-api-access-twgpk\") pod \"nova-cell0-489c-account-create-update-262gf\" (UID: \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\") " pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.221422 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp8l7\" (UniqueName: \"kubernetes.io/projected/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-kube-api-access-pp8l7\") pod \"nova-cell1-db-create-zx7m7\" (UID: \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\") " pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.221525 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-operator-scripts\") pod \"nova-cell1-db-create-zx7m7\" (UID: \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\") " pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.221571 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-operator-scripts\") pod \"nova-cell0-489c-account-create-update-262gf\" (UID: \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\") " pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.240894 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e851-account-create-update-mlg8s"] Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.242231 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.246925 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.252178 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e851-account-create-update-mlg8s"] Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.323219 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-operator-scripts\") pod \"nova-cell1-db-create-zx7m7\" (UID: \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\") " pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.323296 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e75be79-a61a-4e9b-92de-fc51822da088-operator-scripts\") pod \"nova-cell1-e851-account-create-update-mlg8s\" (UID: \"2e75be79-a61a-4e9b-92de-fc51822da088\") " pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.323328 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-operator-scripts\") pod \"nova-cell0-489c-account-create-update-262gf\" (UID: \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\") " pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.323362 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twgpk\" (UniqueName: \"kubernetes.io/projected/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-kube-api-access-twgpk\") pod \"nova-cell0-489c-account-create-update-262gf\" (UID: \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\") " pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.323389 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8wj\" (UniqueName: \"kubernetes.io/projected/2e75be79-a61a-4e9b-92de-fc51822da088-kube-api-access-4m8wj\") pod \"nova-cell1-e851-account-create-update-mlg8s\" (UID: \"2e75be79-a61a-4e9b-92de-fc51822da088\") " pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.323415 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp8l7\" (UniqueName: \"kubernetes.io/projected/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-kube-api-access-pp8l7\") pod \"nova-cell1-db-create-zx7m7\" (UID: \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\") " pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.324502 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-operator-scripts\") pod \"nova-cell1-db-create-zx7m7\" (UID: \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\") " pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.324517 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-operator-scripts\") pod \"nova-cell0-489c-account-create-update-262gf\" (UID: \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\") " pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.346412 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twgpk\" (UniqueName: \"kubernetes.io/projected/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-kube-api-access-twgpk\") pod \"nova-cell0-489c-account-create-update-262gf\" (UID: \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\") " pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.349594 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp8l7\" (UniqueName: \"kubernetes.io/projected/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-kube-api-access-pp8l7\") pod \"nova-cell1-db-create-zx7m7\" (UID: \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\") " pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.425567 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e75be79-a61a-4e9b-92de-fc51822da088-operator-scripts\") pod \"nova-cell1-e851-account-create-update-mlg8s\" (UID: \"2e75be79-a61a-4e9b-92de-fc51822da088\") " pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.425658 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8wj\" (UniqueName: \"kubernetes.io/projected/2e75be79-a61a-4e9b-92de-fc51822da088-kube-api-access-4m8wj\") pod \"nova-cell1-e851-account-create-update-mlg8s\" (UID: \"2e75be79-a61a-4e9b-92de-fc51822da088\") " pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.426403 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e75be79-a61a-4e9b-92de-fc51822da088-operator-scripts\") pod \"nova-cell1-e851-account-create-update-mlg8s\" (UID: \"2e75be79-a61a-4e9b-92de-fc51822da088\") " pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.432890 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.440377 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8wj\" (UniqueName: \"kubernetes.io/projected/2e75be79-a61a-4e9b-92de-fc51822da088-kube-api-access-4m8wj\") pod \"nova-cell1-e851-account-create-update-mlg8s\" (UID: \"2e75be79-a61a-4e9b-92de-fc51822da088\") " pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.525862 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.561497 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.726778 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.726843 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:04 crc kubenswrapper[4758]: I0122 16:53:04.727892 4758 scope.go:117] "RemoveContainer" containerID="7881cf6a1ea9246b1451350e25b945ccd52405bd209ed861bedc85b51ac01118" Jan 22 16:53:04 crc kubenswrapper[4758]: E0122 16:53:04.728137 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5)\"" pod="openstack/watcher-decision-engine-0" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" Jan 22 16:53:06 crc kubenswrapper[4758]: I0122 16:53:06.901207 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-88b76f788-th2jq" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 22 16:53:06 crc kubenswrapper[4758]: I0122 16:53:06.972021 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:53:06 crc kubenswrapper[4758]: I0122 16:53:06.973884 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5fb5ff74dc-qd4wf" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.348404 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.530108 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40487aaa-4c45-41b2-ab14-76477ed2f4bb-logs\") pod \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.530373 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-combined-ca-bundle\") pod \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.530436 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntmsn\" (UniqueName: \"kubernetes.io/projected/40487aaa-4c45-41b2-ab14-76477ed2f4bb-kube-api-access-ntmsn\") pod \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.530467 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-secret-key\") pod \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.530555 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-tls-certs\") pod \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.530598 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-config-data\") pod \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.530626 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-scripts\") pod \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\" (UID: \"40487aaa-4c45-41b2-ab14-76477ed2f4bb\") " Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.530870 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40487aaa-4c45-41b2-ab14-76477ed2f4bb-logs" (OuterVolumeSpecName: "logs") pod "40487aaa-4c45-41b2-ab14-76477ed2f4bb" (UID: "40487aaa-4c45-41b2-ab14-76477ed2f4bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.531442 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40487aaa-4c45-41b2-ab14-76477ed2f4bb-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.536792 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "40487aaa-4c45-41b2-ab14-76477ed2f4bb" (UID: "40487aaa-4c45-41b2-ab14-76477ed2f4bb"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.537950 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40487aaa-4c45-41b2-ab14-76477ed2f4bb-kube-api-access-ntmsn" (OuterVolumeSpecName: "kube-api-access-ntmsn") pod "40487aaa-4c45-41b2-ab14-76477ed2f4bb" (UID: "40487aaa-4c45-41b2-ab14-76477ed2f4bb"). InnerVolumeSpecName "kube-api-access-ntmsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.559382 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-scripts" (OuterVolumeSpecName: "scripts") pod "40487aaa-4c45-41b2-ab14-76477ed2f4bb" (UID: "40487aaa-4c45-41b2-ab14-76477ed2f4bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.565537 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-config-data" (OuterVolumeSpecName: "config-data") pod "40487aaa-4c45-41b2-ab14-76477ed2f4bb" (UID: "40487aaa-4c45-41b2-ab14-76477ed2f4bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.598275 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dpgv7"] Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.602364 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40487aaa-4c45-41b2-ab14-76477ed2f4bb" (UID: "40487aaa-4c45-41b2-ab14-76477ed2f4bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.602515 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "40487aaa-4c45-41b2-ab14-76477ed2f4bb" (UID: "40487aaa-4c45-41b2-ab14-76477ed2f4bb"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.633607 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.633648 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40487aaa-4c45-41b2-ab14-76477ed2f4bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.633660 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.633673 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntmsn\" (UniqueName: \"kubernetes.io/projected/40487aaa-4c45-41b2-ab14-76477ed2f4bb-kube-api-access-ntmsn\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.633684 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.633696 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/40487aaa-4c45-41b2-ab14-76477ed2f4bb-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.703992 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:08 crc kubenswrapper[4758]: W0122 16:53:08.718078 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d9b3df_8ebe_49a4_9a23_0aa7dfc15ea4.slice/crio-39e19f6acde9c350bc9647629c292482d065cbaf187e00114b3cd36c00292a8b WatchSource:0}: Error finding container 39e19f6acde9c350bc9647629c292482d065cbaf187e00114b3cd36c00292a8b: Status 404 returned error can't find the container with id 39e19f6acde9c350bc9647629c292482d065cbaf187e00114b3cd36c00292a8b Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.718148 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zx7m7"] Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.772509 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-489c-account-create-update-262gf"] Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.886429 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e851-account-create-update-mlg8s"] Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.898855 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-bc4e-account-create-update-mmj4j"] Jan 22 16:53:08 crc kubenswrapper[4758]: I0122 16:53:08.905961 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lgj69"] Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.296078 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lgj69" event={"ID":"581d442b-f2db-42dc-bec7-f3b0d32456fb","Type":"ContainerStarted","Data":"866e781011ae24278a73c0ee969011772dbbfaf32ebffd26697a4d672ca6cded"} Jan 22 16:53:09 crc 
kubenswrapper[4758]: I0122 16:53:09.301409 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dpgv7" event={"ID":"b045db5d-f4ac-430d-a697-aeb1a8353fa3","Type":"ContainerStarted","Data":"fe10408f917bc5a4639ba726c854eae9d6a9338197201764120a5b6ad8a4776c"} Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.301458 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dpgv7" event={"ID":"b045db5d-f4ac-430d-a697-aeb1a8353fa3","Type":"ContainerStarted","Data":"b7ba5e0adecd5e2a246bd90d5a96067613363e8fd79f7bc6df903293c0887def"} Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.306279 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f05be9d3-0051-48ce-9100-e436b5f14762","Type":"ContainerStarted","Data":"7389dff9eefe4183e084e60dc37272c8c2186610ed84ed98c5870c96819a070c"} Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.308666 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerStarted","Data":"9dd1d3ecf5fc5b9376f83efe71d6cd9c542cf79b00f5e2147d9ab865866cb6e8"} Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.311556 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b76f788-th2jq" event={"ID":"40487aaa-4c45-41b2-ab14-76477ed2f4bb","Type":"ContainerDied","Data":"f3f9941d0319e5be68d31ff2956ac7851959f13ebf64cc637fe65406c78ee073"} Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.311597 4758 scope.go:117] "RemoveContainer" containerID="3f804875d0ec8e65f89084335817802426f37c82f619dc121c0a2be09bd1b67f" Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.311957 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b76f788-th2jq" Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.315779 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zx7m7" event={"ID":"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4","Type":"ContainerStarted","Data":"39e19f6acde9c350bc9647629c292482d065cbaf187e00114b3cd36c00292a8b"} Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.317818 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e851-account-create-update-mlg8s" event={"ID":"2e75be79-a61a-4e9b-92de-fc51822da088","Type":"ContainerStarted","Data":"74350b428fa335648e17ebcda29021a4a4353c87779b39d803298e4e71a27326"} Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.325071 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-bc4e-account-create-update-mmj4j" event={"ID":"5f427bb1-80ef-4430-aad1-b2ff4b1f4370","Type":"ContainerStarted","Data":"766ce73d3e002a1c9fa6167924ee94486b274bc4fe9044c6c2bd9a324c618a90"} Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.328373 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-489c-account-create-update-262gf" event={"ID":"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64","Type":"ContainerStarted","Data":"ad30071a45b901d5c2d380d70d2739aecad93d493f8bd0e9df4cc803a1f432db"} Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.341553 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-dpgv7" podStartSLOduration=6.341531996 podStartE2EDuration="6.341531996s" podCreationTimestamp="2026-01-22 16:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:09.324982413 +0000 UTC m=+1410.808321698" watchObservedRunningTime="2026-01-22 16:53:09.341531996 +0000 UTC m=+1410.824871291" Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.344027 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.459523505 podStartE2EDuration="22.344016094s" podCreationTimestamp="2026-01-22 16:52:47 +0000 UTC" firstStartedPulling="2026-01-22 16:52:49.147648546 +0000 UTC m=+1390.630987831" lastFinishedPulling="2026-01-22 16:53:08.032141135 +0000 UTC m=+1409.515480420" observedRunningTime="2026-01-22 16:53:09.338212246 +0000 UTC m=+1410.821551531" watchObservedRunningTime="2026-01-22 16:53:09.344016094 +0000 UTC m=+1410.827355379" Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.371650 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88b76f788-th2jq"] Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.384376 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-88b76f788-th2jq"] Jan 22 16:53:09 crc kubenswrapper[4758]: I0122 16:53:09.504839 4758 scope.go:117] "RemoveContainer" containerID="f512c542a3f7080a3e0e9498fe8473553577ff1a142250d2654113eab457a261" Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.339412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerStarted","Data":"19883b7e488b77fb7d77cf08b0d0273bd5505866fcff5136812af83897473892"} Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.339758 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerStarted","Data":"03d13398d6dbb95224afb5b90d44b67b2756a347424307e1a23d66168870c353"} Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.344830 4758 generic.go:334] "Generic (PLEG): container finished" podID="ef1c7aa3-019f-4178-8a2b-dbb9a69fba64" containerID="17318a3dc2c6cb2ae2f943646a145e9108db8d6c009d308b0604b2731ed85f47" exitCode=0 Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.344888 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-489c-account-create-update-262gf" event={"ID":"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64","Type":"ContainerDied","Data":"17318a3dc2c6cb2ae2f943646a145e9108db8d6c009d308b0604b2731ed85f47"} Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.346464 4758 generic.go:334] "Generic (PLEG): container finished" podID="a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4" containerID="ab96ac48962808683a539b7e299acb866974855dda2c573b47c85ff3f69f5a4b" exitCode=0 Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.346531 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zx7m7" event={"ID":"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4","Type":"ContainerDied","Data":"ab96ac48962808683a539b7e299acb866974855dda2c573b47c85ff3f69f5a4b"} Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.348420 4758 generic.go:334] "Generic (PLEG): container finished" podID="2e75be79-a61a-4e9b-92de-fc51822da088" containerID="48ec60e0582253735c044eaf382f2a2d3de1738a8c388fbd002d1303b6cff8ee" exitCode=0 Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.348507 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e851-account-create-update-mlg8s" event={"ID":"2e75be79-a61a-4e9b-92de-fc51822da088","Type":"ContainerDied","Data":"48ec60e0582253735c044eaf382f2a2d3de1738a8c388fbd002d1303b6cff8ee"} Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.351970 4758 generic.go:334] "Generic (PLEG): container finished" podID="581d442b-f2db-42dc-bec7-f3b0d32456fb" containerID="0da8b121f1a90f41679d3a4d85f1f3d708c60e7ba2a50175f60392dbc18bed65" exitCode=0 Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.352083 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lgj69" event={"ID":"581d442b-f2db-42dc-bec7-f3b0d32456fb","Type":"ContainerDied","Data":"0da8b121f1a90f41679d3a4d85f1f3d708c60e7ba2a50175f60392dbc18bed65"} Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.354919 4758 generic.go:334] "Generic (PLEG): container finished" podID="b045db5d-f4ac-430d-a697-aeb1a8353fa3" containerID="fe10408f917bc5a4639ba726c854eae9d6a9338197201764120a5b6ad8a4776c" exitCode=0 Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.355015 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dpgv7" event={"ID":"b045db5d-f4ac-430d-a697-aeb1a8353fa3","Type":"ContainerDied","Data":"fe10408f917bc5a4639ba726c854eae9d6a9338197201764120a5b6ad8a4776c"} Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.356878 4758 generic.go:334] "Generic (PLEG): container finished" podID="5f427bb1-80ef-4430-aad1-b2ff4b1f4370" containerID="a7aeadf3f379101c9b8098bfc406fac71974c449092423e455debdf5153545c0" exitCode=0 Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.356952 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-bc4e-account-create-update-mmj4j" 
event={"ID":"5f427bb1-80ef-4430-aad1-b2ff4b1f4370","Type":"ContainerDied","Data":"a7aeadf3f379101c9b8098bfc406fac71974c449092423e455debdf5153545c0"} Jan 22 16:53:10 crc kubenswrapper[4758]: I0122 16:53:10.824037 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" path="/var/lib/kubelet/pods/40487aaa-4c45-41b2-ab14-76477ed2f4bb/volumes" Jan 22 16:53:11 crc kubenswrapper[4758]: I0122 16:53:11.370173 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerStarted","Data":"850ec3b0875f710ab19fbbbbbb222350378b22f6b1706dcb709be6f8365ed510"} Jan 22 16:53:11 crc kubenswrapper[4758]: I0122 16:53:11.867495 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.020352 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b045db5d-f4ac-430d-a697-aeb1a8353fa3-operator-scripts\") pod \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\" (UID: \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.020403 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9zzf\" (UniqueName: \"kubernetes.io/projected/b045db5d-f4ac-430d-a697-aeb1a8353fa3-kube-api-access-d9zzf\") pod \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\" (UID: \"b045db5d-f4ac-430d-a697-aeb1a8353fa3\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.020877 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b045db5d-f4ac-430d-a697-aeb1a8353fa3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b045db5d-f4ac-430d-a697-aeb1a8353fa3" (UID: "b045db5d-f4ac-430d-a697-aeb1a8353fa3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.021471 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b045db5d-f4ac-430d-a697-aeb1a8353fa3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.029586 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b045db5d-f4ac-430d-a697-aeb1a8353fa3-kube-api-access-d9zzf" (OuterVolumeSpecName: "kube-api-access-d9zzf") pod "b045db5d-f4ac-430d-a697-aeb1a8353fa3" (UID: "b045db5d-f4ac-430d-a697-aeb1a8353fa3"). InnerVolumeSpecName "kube-api-access-d9zzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.122914 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9zzf\" (UniqueName: \"kubernetes.io/projected/b045db5d-f4ac-430d-a697-aeb1a8353fa3-kube-api-access-d9zzf\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.123331 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.133667 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.149153 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.156449 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.168670 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.224414 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-operator-scripts\") pod \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\" (UID: \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.224478 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zft7v\" (UniqueName: \"kubernetes.io/projected/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-kube-api-access-zft7v\") pod \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\" (UID: \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.224554 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp8l7\" (UniqueName: \"kubernetes.io/projected/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-kube-api-access-pp8l7\") pod \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\" (UID: \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.224622 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twgpk\" (UniqueName: \"kubernetes.io/projected/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-kube-api-access-twgpk\") pod \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\" (UID: \"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.224682 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-operator-scripts\") pod \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\" (UID: \"5f427bb1-80ef-4430-aad1-b2ff4b1f4370\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.224722 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-operator-scripts\") pod \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\" (UID: \"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.225000 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef1c7aa3-019f-4178-8a2b-dbb9a69fba64" (UID: "ef1c7aa3-019f-4178-8a2b-dbb9a69fba64"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.225267 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4" (UID: "a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.225294 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f427bb1-80ef-4430-aad1-b2ff4b1f4370" (UID: "5f427bb1-80ef-4430-aad1-b2ff4b1f4370"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.225487 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.225504 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.228534 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-kube-api-access-pp8l7" (OuterVolumeSpecName: "kube-api-access-pp8l7") pod "a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4" (UID: "a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4"). InnerVolumeSpecName "kube-api-access-pp8l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.228570 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-kube-api-access-zft7v" (OuterVolumeSpecName: "kube-api-access-zft7v") pod "5f427bb1-80ef-4430-aad1-b2ff4b1f4370" (UID: "5f427bb1-80ef-4430-aad1-b2ff4b1f4370"). InnerVolumeSpecName "kube-api-access-zft7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.230457 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-kube-api-access-twgpk" (OuterVolumeSpecName: "kube-api-access-twgpk") pod "ef1c7aa3-019f-4178-8a2b-dbb9a69fba64" (UID: "ef1c7aa3-019f-4178-8a2b-dbb9a69fba64"). InnerVolumeSpecName "kube-api-access-twgpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.326779 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m8wj\" (UniqueName: \"kubernetes.io/projected/2e75be79-a61a-4e9b-92de-fc51822da088-kube-api-access-4m8wj\") pod \"2e75be79-a61a-4e9b-92de-fc51822da088\" (UID: \"2e75be79-a61a-4e9b-92de-fc51822da088\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.327281 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/581d442b-f2db-42dc-bec7-f3b0d32456fb-operator-scripts\") pod \"581d442b-f2db-42dc-bec7-f3b0d32456fb\" (UID: \"581d442b-f2db-42dc-bec7-f3b0d32456fb\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.327393 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gsg2\" (UniqueName: \"kubernetes.io/projected/581d442b-f2db-42dc-bec7-f3b0d32456fb-kube-api-access-6gsg2\") pod \"581d442b-f2db-42dc-bec7-f3b0d32456fb\" (UID: \"581d442b-f2db-42dc-bec7-f3b0d32456fb\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.327530 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e75be79-a61a-4e9b-92de-fc51822da088-operator-scripts\") pod \"2e75be79-a61a-4e9b-92de-fc51822da088\" (UID: \"2e75be79-a61a-4e9b-92de-fc51822da088\") " Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.327887 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e75be79-a61a-4e9b-92de-fc51822da088-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e75be79-a61a-4e9b-92de-fc51822da088" (UID: "2e75be79-a61a-4e9b-92de-fc51822da088"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.327986 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581d442b-f2db-42dc-bec7-f3b0d32456fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "581d442b-f2db-42dc-bec7-f3b0d32456fb" (UID: "581d442b-f2db-42dc-bec7-f3b0d32456fb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.328241 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twgpk\" (UniqueName: \"kubernetes.io/projected/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64-kube-api-access-twgpk\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.328343 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.328434 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/581d442b-f2db-42dc-bec7-f3b0d32456fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.328492 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e75be79-a61a-4e9b-92de-fc51822da088-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.328549 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zft7v\" (UniqueName: \"kubernetes.io/projected/5f427bb1-80ef-4430-aad1-b2ff4b1f4370-kube-api-access-zft7v\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.328623 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp8l7\" (UniqueName: \"kubernetes.io/projected/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4-kube-api-access-pp8l7\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.330186 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e75be79-a61a-4e9b-92de-fc51822da088-kube-api-access-4m8wj" (OuterVolumeSpecName: "kube-api-access-4m8wj") pod "2e75be79-a61a-4e9b-92de-fc51822da088" (UID: "2e75be79-a61a-4e9b-92de-fc51822da088"). InnerVolumeSpecName "kube-api-access-4m8wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.330681 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581d442b-f2db-42dc-bec7-f3b0d32456fb-kube-api-access-6gsg2" (OuterVolumeSpecName: "kube-api-access-6gsg2") pod "581d442b-f2db-42dc-bec7-f3b0d32456fb" (UID: "581d442b-f2db-42dc-bec7-f3b0d32456fb"). InnerVolumeSpecName "kube-api-access-6gsg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.383269 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dpgv7" event={"ID":"b045db5d-f4ac-430d-a697-aeb1a8353fa3","Type":"ContainerDied","Data":"b7ba5e0adecd5e2a246bd90d5a96067613363e8fd79f7bc6df903293c0887def"} Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.383317 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7ba5e0adecd5e2a246bd90d5a96067613363e8fd79f7bc6df903293c0887def" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.383334 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dpgv7" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.386079 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-bc4e-account-create-update-mmj4j" event={"ID":"5f427bb1-80ef-4430-aad1-b2ff4b1f4370","Type":"ContainerDied","Data":"766ce73d3e002a1c9fa6167924ee94486b274bc4fe9044c6c2bd9a324c618a90"} Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.386118 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766ce73d3e002a1c9fa6167924ee94486b274bc4fe9044c6c2bd9a324c618a90" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.386123 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-bc4e-account-create-update-mmj4j" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.387887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-489c-account-create-update-262gf" event={"ID":"ef1c7aa3-019f-4178-8a2b-dbb9a69fba64","Type":"ContainerDied","Data":"ad30071a45b901d5c2d380d70d2739aecad93d493f8bd0e9df4cc803a1f432db"} Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.387910 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad30071a45b901d5c2d380d70d2739aecad93d493f8bd0e9df4cc803a1f432db" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.387943 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-489c-account-create-update-262gf" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.399650 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zx7m7" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.399687 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zx7m7" event={"ID":"a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4","Type":"ContainerDied","Data":"39e19f6acde9c350bc9647629c292482d065cbaf187e00114b3cd36c00292a8b"} Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.399718 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e19f6acde9c350bc9647629c292482d065cbaf187e00114b3cd36c00292a8b" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.410017 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e851-account-create-update-mlg8s" event={"ID":"2e75be79-a61a-4e9b-92de-fc51822da088","Type":"ContainerDied","Data":"74350b428fa335648e17ebcda29021a4a4353c87779b39d803298e4e71a27326"} Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.410041 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74350b428fa335648e17ebcda29021a4a4353c87779b39d803298e4e71a27326" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.410081 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e851-account-create-update-mlg8s" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.415500 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lgj69" event={"ID":"581d442b-f2db-42dc-bec7-f3b0d32456fb","Type":"ContainerDied","Data":"866e781011ae24278a73c0ee969011772dbbfaf32ebffd26697a4d672ca6cded"} Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.415535 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866e781011ae24278a73c0ee969011772dbbfaf32ebffd26697a4d672ca6cded" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.415599 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lgj69" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.431227 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gsg2\" (UniqueName: \"kubernetes.io/projected/581d442b-f2db-42dc-bec7-f3b0d32456fb-kube-api-access-6gsg2\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:12 crc kubenswrapper[4758]: I0122 16:53:12.431265 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m8wj\" (UniqueName: \"kubernetes.io/projected/2e75be79-a61a-4e9b-92de-fc51822da088-kube-api-access-4m8wj\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:13 crc kubenswrapper[4758]: I0122 16:53:13.428301 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerStarted","Data":"47a10e26e93699f6a8d6f6e42e2a45b1e9a64a9345ba9a328d25965b8130f875"} Jan 22 16:53:13 crc kubenswrapper[4758]: I0122 16:53:13.428588 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 16:53:13 crc kubenswrapper[4758]: I0122 16:53:13.428579 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="sg-core" containerID="cri-o://850ec3b0875f710ab19fbbbbbb222350378b22f6b1706dcb709be6f8365ed510" gracePeriod=30 Jan 22 16:53:13 crc kubenswrapper[4758]: I0122 16:53:13.428545 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="ceilometer-central-agent" containerID="cri-o://03d13398d6dbb95224afb5b90d44b67b2756a347424307e1a23d66168870c353" gracePeriod=30 Jan 22 16:53:13 crc kubenswrapper[4758]: I0122 16:53:13.428611 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="proxy-httpd" containerID="cri-o://47a10e26e93699f6a8d6f6e42e2a45b1e9a64a9345ba9a328d25965b8130f875" gracePeriod=30 Jan 22 16:53:13 crc kubenswrapper[4758]: I0122 16:53:13.428583 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="ceilometer-notification-agent" containerID="cri-o://19883b7e488b77fb7d77cf08b0d0273bd5505866fcff5136812af83897473892" gracePeriod=30 Jan 22 16:53:13 crc kubenswrapper[4758]: I0122 16:53:13.453702 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.836921187 podStartE2EDuration="13.45368451s" podCreationTimestamp="2026-01-22 16:53:00 +0000 UTC" firstStartedPulling="2026-01-22 
16:53:08.718260819 +0000 UTC m=+1410.201600104" lastFinishedPulling="2026-01-22 16:53:12.335024142 +0000 UTC m=+1413.818363427" observedRunningTime="2026-01-22 16:53:13.451647705 +0000 UTC m=+1414.934986990" watchObservedRunningTime="2026-01-22 16:53:13.45368451 +0000 UTC m=+1414.937023795" Jan 22 16:53:13 crc kubenswrapper[4758]: I0122 16:53:13.839961 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:53:13 crc kubenswrapper[4758]: I0122 16:53:13.840330 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.427296 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgsmp"] Jan 22 16:53:14 crc kubenswrapper[4758]: E0122 16:53:14.427819 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4" containerName="mariadb-database-create" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.427840 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4" containerName="mariadb-database-create" Jan 22 16:53:14 crc kubenswrapper[4758]: E0122 16:53:14.427859 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon-log" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.427867 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon-log" Jan 22 16:53:14 crc kubenswrapper[4758]: E0122 16:53:14.427879 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.427888 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon" Jan 22 16:53:14 crc kubenswrapper[4758]: E0122 16:53:14.427903 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581d442b-f2db-42dc-bec7-f3b0d32456fb" containerName="mariadb-database-create" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.427913 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="581d442b-f2db-42dc-bec7-f3b0d32456fb" containerName="mariadb-database-create" Jan 22 16:53:14 crc kubenswrapper[4758]: E0122 16:53:14.427932 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f427bb1-80ef-4430-aad1-b2ff4b1f4370" containerName="mariadb-account-create-update" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.427940 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f427bb1-80ef-4430-aad1-b2ff4b1f4370" containerName="mariadb-account-create-update" Jan 22 16:53:14 crc kubenswrapper[4758]: E0122 16:53:14.427956 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b045db5d-f4ac-430d-a697-aeb1a8353fa3" containerName="mariadb-database-create" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.427964 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b045db5d-f4ac-430d-a697-aeb1a8353fa3" containerName="mariadb-database-create" Jan 22 16:53:14 crc kubenswrapper[4758]: E0122 16:53:14.427977 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e75be79-a61a-4e9b-92de-fc51822da088" containerName="mariadb-account-create-update" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.427985 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e75be79-a61a-4e9b-92de-fc51822da088" containerName="mariadb-account-create-update" Jan 22 16:53:14 crc kubenswrapper[4758]: E0122 16:53:14.428007 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1c7aa3-019f-4178-8a2b-dbb9a69fba64" containerName="mariadb-account-create-update" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.428015 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1c7aa3-019f-4178-8a2b-dbb9a69fba64" containerName="mariadb-account-create-update" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.428241 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b045db5d-f4ac-430d-a697-aeb1a8353fa3" containerName="mariadb-database-create" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.428258 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f427bb1-80ef-4430-aad1-b2ff4b1f4370" containerName="mariadb-account-create-update" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.428280 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4" containerName="mariadb-database-create" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.428296 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e75be79-a61a-4e9b-92de-fc51822da088" containerName="mariadb-account-create-update" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.428311 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef1c7aa3-019f-4178-8a2b-dbb9a69fba64" containerName="mariadb-account-create-update" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.428326 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon-log" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.428345 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="40487aaa-4c45-41b2-ab14-76477ed2f4bb" containerName="horizon" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.428354 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="581d442b-f2db-42dc-bec7-f3b0d32456fb" containerName="mariadb-database-create" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.429232 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.439595 4758 generic.go:334] "Generic (PLEG): container finished" podID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerID="47a10e26e93699f6a8d6f6e42e2a45b1e9a64a9345ba9a328d25965b8130f875" exitCode=0 Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.439626 4758 generic.go:334] "Generic (PLEG): container finished" podID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerID="850ec3b0875f710ab19fbbbbbb222350378b22f6b1706dcb709be6f8365ed510" exitCode=2 Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.439634 4758 generic.go:334] "Generic (PLEG): container finished" podID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerID="19883b7e488b77fb7d77cf08b0d0273bd5505866fcff5136812af83897473892" exitCode=0 Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.439655 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerDied","Data":"47a10e26e93699f6a8d6f6e42e2a45b1e9a64a9345ba9a328d25965b8130f875"} Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.439681 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerDied","Data":"850ec3b0875f710ab19fbbbbbb222350378b22f6b1706dcb709be6f8365ed510"} Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.439699 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerDied","Data":"19883b7e488b77fb7d77cf08b0d0273bd5505866fcff5136812af83897473892"} Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.442025 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgsmp"] Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.445465 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.456853 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r6mc9" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.456982 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.578411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-scripts\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.578506 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.578583 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bhfv\" (UniqueName: \"kubernetes.io/projected/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-kube-api-access-5bhfv\") pod 
\"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.578761 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-config-data\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.680650 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-scripts\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.680735 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.680805 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bhfv\" (UniqueName: \"kubernetes.io/projected/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-kube-api-access-5bhfv\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.680882 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-config-data\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.686378 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-scripts\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.686726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.697872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-config-data\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.707764 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bhfv\" (UniqueName: 
\"kubernetes.io/projected/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-kube-api-access-5bhfv\") pod \"nova-cell0-conductor-db-sync-qgsmp\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.726250 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.727150 4758 scope.go:117] "RemoveContainer" containerID="7881cf6a1ea9246b1451350e25b945ccd52405bd209ed861bedc85b51ac01118" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.727344 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:14 crc kubenswrapper[4758]: I0122 16:53:14.792291 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:15 crc kubenswrapper[4758]: I0122 16:53:15.331219 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgsmp"] Jan 22 16:53:15 crc kubenswrapper[4758]: I0122 16:53:15.454799 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qgsmp" event={"ID":"fc06c7d9-b286-48cd-a359-6c18d1cc0e80","Type":"ContainerStarted","Data":"5d9f60788a6a31b9064b2981ac5a025e1801a7272a155115eeb23625fa0a0f7c"} Jan 22 16:53:15 crc kubenswrapper[4758]: I0122 16:53:15.456989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5","Type":"ContainerStarted","Data":"811915bfcfccc9a4a5f800579b083a1bf643cbdcda278638ddf797e1bd37b62d"} Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.469626 4758 generic.go:334] "Generic (PLEG): container finished" podID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerID="03d13398d6dbb95224afb5b90d44b67b2756a347424307e1a23d66168870c353" exitCode=0 Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.469788 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerDied","Data":"03d13398d6dbb95224afb5b90d44b67b2756a347424307e1a23d66168870c353"} Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.689185 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.840091 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-sg-core-conf-yaml\") pod \"09104569-02cb-462b-84af-6e8d4f7e6a7d\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.840154 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-log-httpd\") pod \"09104569-02cb-462b-84af-6e8d4f7e6a7d\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.840173 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-config-data\") pod \"09104569-02cb-462b-84af-6e8d4f7e6a7d\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.840194 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-run-httpd\") pod \"09104569-02cb-462b-84af-6e8d4f7e6a7d\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.840223 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvmt4\" (UniqueName: \"kubernetes.io/projected/09104569-02cb-462b-84af-6e8d4f7e6a7d-kube-api-access-jvmt4\") pod \"09104569-02cb-462b-84af-6e8d4f7e6a7d\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.840256 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-scripts\") pod \"09104569-02cb-462b-84af-6e8d4f7e6a7d\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.840370 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-combined-ca-bundle\") pod \"09104569-02cb-462b-84af-6e8d4f7e6a7d\" (UID: \"09104569-02cb-462b-84af-6e8d4f7e6a7d\") " Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.849176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "09104569-02cb-462b-84af-6e8d4f7e6a7d" (UID: "09104569-02cb-462b-84af-6e8d4f7e6a7d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.849459 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "09104569-02cb-462b-84af-6e8d4f7e6a7d" (UID: "09104569-02cb-462b-84af-6e8d4f7e6a7d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.866014 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09104569-02cb-462b-84af-6e8d4f7e6a7d-kube-api-access-jvmt4" (OuterVolumeSpecName: "kube-api-access-jvmt4") pod "09104569-02cb-462b-84af-6e8d4f7e6a7d" (UID: "09104569-02cb-462b-84af-6e8d4f7e6a7d"). InnerVolumeSpecName "kube-api-access-jvmt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.899358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-scripts" (OuterVolumeSpecName: "scripts") pod "09104569-02cb-462b-84af-6e8d4f7e6a7d" (UID: "09104569-02cb-462b-84af-6e8d4f7e6a7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.929926 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "09104569-02cb-462b-84af-6e8d4f7e6a7d" (UID: "09104569-02cb-462b-84af-6e8d4f7e6a7d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.950410 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.950458 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvmt4\" (UniqueName: \"kubernetes.io/projected/09104569-02cb-462b-84af-6e8d4f7e6a7d-kube-api-access-jvmt4\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.950469 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.950479 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.950487 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09104569-02cb-462b-84af-6e8d4f7e6a7d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4758]: I0122 16:53:16.972903 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09104569-02cb-462b-84af-6e8d4f7e6a7d" (UID: "09104569-02cb-462b-84af-6e8d4f7e6a7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.044556 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-config-data" (OuterVolumeSpecName: "config-data") pod "09104569-02cb-462b-84af-6e8d4f7e6a7d" (UID: "09104569-02cb-462b-84af-6e8d4f7e6a7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.052365 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.052401 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09104569-02cb-462b-84af-6e8d4f7e6a7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.484832 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09104569-02cb-462b-84af-6e8d4f7e6a7d","Type":"ContainerDied","Data":"9dd1d3ecf5fc5b9376f83efe71d6cd9c542cf79b00f5e2147d9ab865866cb6e8"} Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.485077 4758 scope.go:117] "RemoveContainer" containerID="47a10e26e93699f6a8d6f6e42e2a45b1e9a64a9345ba9a328d25965b8130f875" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.484915 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.515346 4758 scope.go:117] "RemoveContainer" containerID="850ec3b0875f710ab19fbbbbbb222350378b22f6b1706dcb709be6f8365ed510" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.516215 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.524457 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.543396 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:17 crc kubenswrapper[4758]: E0122 16:53:17.543900 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="ceilometer-central-agent" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.543918 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="ceilometer-central-agent" Jan 22 16:53:17 crc kubenswrapper[4758]: E0122 16:53:17.543944 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="sg-core" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.543956 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="sg-core" Jan 22 16:53:17 crc kubenswrapper[4758]: E0122 16:53:17.543980 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="ceilometer-notification-agent" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.543989 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="ceilometer-notification-agent" Jan 22 16:53:17 crc kubenswrapper[4758]: E0122 16:53:17.544004 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="proxy-httpd" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.544011 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="proxy-httpd" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.544243 4758 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="proxy-httpd" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.544263 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="ceilometer-central-agent" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.544280 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="ceilometer-notification-agent" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.544301 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" containerName="sg-core" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.547443 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.550852 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.551291 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.566033 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.568307 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.568716 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-config-data\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.568774 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.568827 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-run-httpd\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.568926 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-log-httpd\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.569028 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsllq\" (UniqueName: \"kubernetes.io/projected/bed104d4-892d-43f0-bb3f-82be92304823-kube-api-access-tsllq\") pod \"ceilometer-0\" (UID: 
\"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.569114 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-scripts\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.667449 4758 scope.go:117] "RemoveContainer" containerID="19883b7e488b77fb7d77cf08b0d0273bd5505866fcff5136812af83897473892" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.671332 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-log-httpd\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.671408 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsllq\" (UniqueName: \"kubernetes.io/projected/bed104d4-892d-43f0-bb3f-82be92304823-kube-api-access-tsllq\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.671461 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-scripts\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.671513 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.671559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-config-data\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.671578 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.671605 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-run-httpd\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.671862 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-log-httpd\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.671966 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-run-httpd\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.680718 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-config-data\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.688621 4758 scope.go:117] "RemoveContainer" containerID="03d13398d6dbb95224afb5b90d44b67b2756a347424307e1a23d66168870c353" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.695916 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.700211 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.701167 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-scripts\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.703455 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsllq\" (UniqueName: \"kubernetes.io/projected/bed104d4-892d-43f0-bb3f-82be92304823-kube-api-access-tsllq\") pod \"ceilometer-0\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " pod="openstack/ceilometer-0" Jan 22 16:53:17 crc kubenswrapper[4758]: I0122 16:53:17.868108 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:18 crc kubenswrapper[4758]: I0122 16:53:18.403261 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:18 crc kubenswrapper[4758]: W0122 16:53:18.412194 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbed104d4_892d_43f0_bb3f_82be92304823.slice/crio-d59b9bd9043d282c17b394d00eb3d47ed0ca43efad9fa7be03545fc7a64f7541 WatchSource:0}: Error finding container d59b9bd9043d282c17b394d00eb3d47ed0ca43efad9fa7be03545fc7a64f7541: Status 404 returned error can't find the container with id d59b9bd9043d282c17b394d00eb3d47ed0ca43efad9fa7be03545fc7a64f7541 Jan 22 16:53:18 crc kubenswrapper[4758]: I0122 16:53:18.495569 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerStarted","Data":"d59b9bd9043d282c17b394d00eb3d47ed0ca43efad9fa7be03545fc7a64f7541"} Jan 22 16:53:18 crc kubenswrapper[4758]: I0122 16:53:18.819309 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09104569-02cb-462b-84af-6e8d4f7e6a7d" path="/var/lib/kubelet/pods/09104569-02cb-462b-84af-6e8d4f7e6a7d/volumes" Jan 22 16:53:18 crc kubenswrapper[4758]: I0122 16:53:18.835522 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:19 crc kubenswrapper[4758]: I0122 16:53:19.506723 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerStarted","Data":"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19"} Jan 22 16:53:24 crc kubenswrapper[4758]: I0122 16:53:24.726319 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:24 crc kubenswrapper[4758]: I0122 16:53:24.762892 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:25 crc kubenswrapper[4758]: I0122 16:53:25.571731 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:25 crc kubenswrapper[4758]: I0122 16:53:25.605586 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:27 crc kubenswrapper[4758]: I0122 16:53:27.593249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qgsmp" event={"ID":"fc06c7d9-b286-48cd-a359-6c18d1cc0e80","Type":"ContainerStarted","Data":"21601da3f1fa3b099a62055dd594476ed77fb3ef4a75505adb0aaba258d9abde"} Jan 22 16:53:27 crc kubenswrapper[4758]: I0122 16:53:27.596970 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerStarted","Data":"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842"} Jan 22 16:53:27 crc kubenswrapper[4758]: I0122 16:53:27.620596 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qgsmp" podStartSLOduration=2.590466251 podStartE2EDuration="13.620569948s" podCreationTimestamp="2026-01-22 16:53:14 +0000 UTC" firstStartedPulling="2026-01-22 16:53:15.328177158 +0000 UTC m=+1416.811516443" lastFinishedPulling="2026-01-22 16:53:26.358280855 +0000 UTC m=+1427.841620140" 
observedRunningTime="2026-01-22 16:53:27.613034102 +0000 UTC m=+1429.096373387" watchObservedRunningTime="2026-01-22 16:53:27.620569948 +0000 UTC m=+1429.103909233" Jan 22 16:53:28 crc kubenswrapper[4758]: I0122 16:53:28.176885 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:53:28 crc kubenswrapper[4758]: I0122 16:53:28.606929 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerStarted","Data":"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131"} Jan 22 16:53:28 crc kubenswrapper[4758]: I0122 16:53:28.607184 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" containerID="cri-o://811915bfcfccc9a4a5f800579b083a1bf643cbdcda278638ddf797e1bd37b62d" gracePeriod=30 Jan 22 16:53:29 crc kubenswrapper[4758]: I0122 16:53:29.617930 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerStarted","Data":"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648"} Jan 22 16:53:29 crc kubenswrapper[4758]: I0122 16:53:29.618318 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="ceilometer-central-agent" containerID="cri-o://366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19" gracePeriod=30 Jan 22 16:53:29 crc kubenswrapper[4758]: I0122 16:53:29.618497 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="proxy-httpd" containerID="cri-o://828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648" gracePeriod=30 Jan 22 16:53:29 crc kubenswrapper[4758]: I0122 16:53:29.618540 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="sg-core" containerID="cri-o://1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131" gracePeriod=30 Jan 22 16:53:29 crc kubenswrapper[4758]: I0122 16:53:29.618574 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="ceilometer-notification-agent" containerID="cri-o://c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842" gracePeriod=30 Jan 22 16:53:29 crc kubenswrapper[4758]: I0122 16:53:29.618588 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 16:53:29 crc kubenswrapper[4758]: I0122 16:53:29.655000 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.215466719 podStartE2EDuration="12.65497525s" podCreationTimestamp="2026-01-22 16:53:17 +0000 UTC" firstStartedPulling="2026-01-22 16:53:18.417357789 +0000 UTC m=+1419.900697074" lastFinishedPulling="2026-01-22 16:53:28.85686631 +0000 UTC m=+1430.340205605" observedRunningTime="2026-01-22 16:53:29.640353161 +0000 UTC m=+1431.123692446" watchObservedRunningTime="2026-01-22 16:53:29.65497525 +0000 UTC m=+1431.138314525" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.430855 4758 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.611799 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsllq\" (UniqueName: \"kubernetes.io/projected/bed104d4-892d-43f0-bb3f-82be92304823-kube-api-access-tsllq\") pod \"bed104d4-892d-43f0-bb3f-82be92304823\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.612016 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-run-httpd\") pod \"bed104d4-892d-43f0-bb3f-82be92304823\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.612064 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-config-data\") pod \"bed104d4-892d-43f0-bb3f-82be92304823\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.612098 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-sg-core-conf-yaml\") pod \"bed104d4-892d-43f0-bb3f-82be92304823\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.612168 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-scripts\") pod \"bed104d4-892d-43f0-bb3f-82be92304823\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.612243 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-log-httpd\") pod \"bed104d4-892d-43f0-bb3f-82be92304823\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.612318 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-combined-ca-bundle\") pod \"bed104d4-892d-43f0-bb3f-82be92304823\" (UID: \"bed104d4-892d-43f0-bb3f-82be92304823\") " Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.612727 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bed104d4-892d-43f0-bb3f-82be92304823" (UID: "bed104d4-892d-43f0-bb3f-82be92304823"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.612890 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bed104d4-892d-43f0-bb3f-82be92304823" (UID: "bed104d4-892d-43f0-bb3f-82be92304823"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.613039 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.613060 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bed104d4-892d-43f0-bb3f-82be92304823-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.628899 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-scripts" (OuterVolumeSpecName: "scripts") pod "bed104d4-892d-43f0-bb3f-82be92304823" (UID: "bed104d4-892d-43f0-bb3f-82be92304823"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.631358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed104d4-892d-43f0-bb3f-82be92304823-kube-api-access-tsllq" (OuterVolumeSpecName: "kube-api-access-tsllq") pod "bed104d4-892d-43f0-bb3f-82be92304823" (UID: "bed104d4-892d-43f0-bb3f-82be92304823"). InnerVolumeSpecName "kube-api-access-tsllq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634122 4758 generic.go:334] "Generic (PLEG): container finished" podID="bed104d4-892d-43f0-bb3f-82be92304823" containerID="828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648" exitCode=0 Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634153 4758 generic.go:334] "Generic (PLEG): container finished" podID="bed104d4-892d-43f0-bb3f-82be92304823" containerID="1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131" exitCode=2 Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634159 4758 generic.go:334] "Generic (PLEG): container finished" podID="bed104d4-892d-43f0-bb3f-82be92304823" containerID="c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842" exitCode=0 Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634165 4758 generic.go:334] "Generic (PLEG): container finished" podID="bed104d4-892d-43f0-bb3f-82be92304823" containerID="366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19" exitCode=0 Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634183 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerDied","Data":"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648"} Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634208 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerDied","Data":"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131"} Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634218 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerDied","Data":"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842"} Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634226 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerDied","Data":"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19"} Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634235 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bed104d4-892d-43f0-bb3f-82be92304823","Type":"ContainerDied","Data":"d59b9bd9043d282c17b394d00eb3d47ed0ca43efad9fa7be03545fc7a64f7541"} Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634249 4758 scope.go:117] "RemoveContainer" containerID="828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.634424 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.657198 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bed104d4-892d-43f0-bb3f-82be92304823" (UID: "bed104d4-892d-43f0-bb3f-82be92304823"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.701589 4758 scope.go:117] "RemoveContainer" containerID="1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.714264 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsllq\" (UniqueName: \"kubernetes.io/projected/bed104d4-892d-43f0-bb3f-82be92304823-kube-api-access-tsllq\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.714293 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.714302 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.723362 4758 scope.go:117] "RemoveContainer" containerID="c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.742298 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bed104d4-892d-43f0-bb3f-82be92304823" (UID: "bed104d4-892d-43f0-bb3f-82be92304823"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.744980 4758 scope.go:117] "RemoveContainer" containerID="366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.748010 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-config-data" (OuterVolumeSpecName: "config-data") pod "bed104d4-892d-43f0-bb3f-82be92304823" (UID: "bed104d4-892d-43f0-bb3f-82be92304823"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.765780 4758 scope.go:117] "RemoveContainer" containerID="828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648" Jan 22 16:53:30 crc kubenswrapper[4758]: E0122 16:53:30.779390 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648\": container with ID starting with 828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648 not found: ID does not exist" containerID="828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.779469 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648"} err="failed to get container status \"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648\": rpc error: code = NotFound desc = could not find container \"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648\": container with ID starting with 828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.779505 4758 scope.go:117] "RemoveContainer" containerID="1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131" Jan 22 16:53:30 crc kubenswrapper[4758]: E0122 16:53:30.780198 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131\": container with ID starting with 1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131 not found: ID does not exist" containerID="1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.780224 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131"} err="failed to get container status \"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131\": rpc error: code = NotFound desc = could not find container \"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131\": container with ID starting with 1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.780273 4758 scope.go:117] "RemoveContainer" containerID="c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842" Jan 22 16:53:30 crc kubenswrapper[4758]: E0122 16:53:30.780545 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842\": container with ID starting with c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842 not found: ID does not exist" containerID="c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.780596 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842"} err="failed to get container status \"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842\": rpc error: code = NotFound desc = could not 
find container \"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842\": container with ID starting with c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.780615 4758 scope.go:117] "RemoveContainer" containerID="366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19" Jan 22 16:53:30 crc kubenswrapper[4758]: E0122 16:53:30.780888 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19\": container with ID starting with 366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19 not found: ID does not exist" containerID="366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.780933 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19"} err="failed to get container status \"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19\": rpc error: code = NotFound desc = could not find container \"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19\": container with ID starting with 366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.780949 4758 scope.go:117] "RemoveContainer" containerID="828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.781137 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648"} err="failed to get container status \"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648\": rpc error: code = NotFound desc = could not find container \"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648\": container with ID starting with 828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.781179 4758 scope.go:117] "RemoveContainer" containerID="1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.781374 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131"} err="failed to get container status \"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131\": rpc error: code = NotFound desc = could not find container \"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131\": container with ID starting with 1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.781391 4758 scope.go:117] "RemoveContainer" containerID="c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.781612 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842"} err="failed to get container status \"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842\": rpc error: code = NotFound desc = could not 
find container \"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842\": container with ID starting with c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.781626 4758 scope.go:117] "RemoveContainer" containerID="366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.781861 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19"} err="failed to get container status \"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19\": rpc error: code = NotFound desc = could not find container \"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19\": container with ID starting with 366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.781882 4758 scope.go:117] "RemoveContainer" containerID="828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.782115 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648"} err="failed to get container status \"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648\": rpc error: code = NotFound desc = could not find container \"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648\": container with ID starting with 828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.782132 4758 scope.go:117] "RemoveContainer" containerID="1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.785263 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131"} err="failed to get container status \"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131\": rpc error: code = NotFound desc = could not find container \"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131\": container with ID starting with 1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.785286 4758 scope.go:117] "RemoveContainer" containerID="c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.786056 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842"} err="failed to get container status \"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842\": rpc error: code = NotFound desc = could not find container \"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842\": container with ID starting with c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.786080 4758 scope.go:117] "RemoveContainer" containerID="366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.786399 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19"} err="failed to get container status \"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19\": rpc error: code = NotFound desc = could not find container \"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19\": container with ID starting with 366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.786420 4758 scope.go:117] "RemoveContainer" containerID="828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.788930 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648"} err="failed to get container status \"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648\": rpc error: code = NotFound desc = could not find container \"828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648\": container with ID starting with 828a738bdf7c4e8e10eec2a3e01773bff8d1367dcdd77fe8582ffa74de917648 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.788952 4758 scope.go:117] "RemoveContainer" containerID="1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.789174 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131"} err="failed to get container status \"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131\": rpc error: code = NotFound desc = could not find container \"1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131\": container with ID starting with 1a975546051aef55a64ddfa36ba2a05e0deef332086003db5c7cf1fa74e55131 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.789201 4758 scope.go:117] "RemoveContainer" containerID="c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.789430 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842"} err="failed to get container status \"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842\": rpc error: code = NotFound desc = could not find container \"c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842\": container with ID starting with c15498997136c2b8dea881ebe011726235496f505cef0ddfb729add561479842 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.789457 4758 scope.go:117] "RemoveContainer" containerID="366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.789671 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19"} err="failed to get container status \"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19\": rpc error: code = NotFound desc = could not find container \"366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19\": container with ID starting with 
366a1997478e77bedb233a1f3ac0f5f546950c0174b8131d5943788ccfd4fb19 not found: ID does not exist" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.881418 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:30 crc kubenswrapper[4758]: I0122 16:53:30.881474 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed104d4-892d-43f0-bb3f-82be92304823-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.019410 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.032032 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.048371 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:31 crc kubenswrapper[4758]: E0122 16:53:31.048728 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="proxy-httpd" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.048761 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="proxy-httpd" Jan 22 16:53:31 crc kubenswrapper[4758]: E0122 16:53:31.048793 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="sg-core" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.048801 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="sg-core" Jan 22 16:53:31 crc kubenswrapper[4758]: E0122 16:53:31.048819 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="ceilometer-notification-agent" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.048824 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="ceilometer-notification-agent" Jan 22 16:53:31 crc kubenswrapper[4758]: E0122 16:53:31.048840 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="ceilometer-central-agent" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.048845 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="ceilometer-central-agent" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.049190 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="proxy-httpd" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.049205 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="sg-core" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.049215 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="ceilometer-notification-agent" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.049226 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed104d4-892d-43f0-bb3f-82be92304823" containerName="ceilometer-central-agent" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 
16:53:31.054949 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.057629 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.057644 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.067591 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.191022 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-scripts\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.191081 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-run-httpd\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.191121 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-log-httpd\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.191135 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.193060 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-config-data\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.193142 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.193215 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bplws\" (UniqueName: \"kubernetes.io/projected/a2159289-e740-441a-80f8-0ce0d0806e52-kube-api-access-bplws\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.294830 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-config-data\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" 
Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.294883 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.294918 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bplws\" (UniqueName: \"kubernetes.io/projected/a2159289-e740-441a-80f8-0ce0d0806e52-kube-api-access-bplws\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.294957 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-scripts\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.295008 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-run-httpd\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.295047 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.295070 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-log-httpd\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.296566 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-log-httpd\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.298645 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.311237 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-run-httpd\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.311722 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-config-data\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 
16:53:31.317957 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-scripts\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.320030 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.335656 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bplws\" (UniqueName: \"kubernetes.io/projected/a2159289-e740-441a-80f8-0ce0d0806e52-kube-api-access-bplws\") pod \"ceilometer-0\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.337656 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.338050 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerName="glance-log" containerID="cri-o://1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1" gracePeriod=30 Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.338176 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerName="glance-httpd" containerID="cri-o://80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b" gracePeriod=30 Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.372413 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.646138 4758 generic.go:334] "Generic (PLEG): container finished" podID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerID="1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1" exitCode=143 Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.646392 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5","Type":"ContainerDied","Data":"1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1"} Jan 22 16:53:31 crc kubenswrapper[4758]: I0122 16:53:31.902924 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:31 crc kubenswrapper[4758]: W0122 16:53:31.912848 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2159289_e740_441a_80f8_0ce0d0806e52.slice/crio-302a40d42a80394d7604f6ce7b72d1227d5adbaedfbcff235667ede3f1edc4b7 WatchSource:0}: Error finding container 302a40d42a80394d7604f6ce7b72d1227d5adbaedfbcff235667ede3f1edc4b7: Status 404 returned error can't find the container with id 302a40d42a80394d7604f6ce7b72d1227d5adbaedfbcff235667ede3f1edc4b7 Jan 22 16:53:32 crc kubenswrapper[4758]: I0122 16:53:32.477576 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:53:32 crc kubenswrapper[4758]: I0122 16:53:32.478058 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerName="glance-log" containerID="cri-o://6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb" gracePeriod=30 Jan 22 16:53:32 crc kubenswrapper[4758]: I0122 16:53:32.478201 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerName="glance-httpd" containerID="cri-o://054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27" gracePeriod=30 Jan 22 16:53:32 crc kubenswrapper[4758]: I0122 16:53:32.688499 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerStarted","Data":"40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc"} Jan 22 16:53:32 crc kubenswrapper[4758]: I0122 16:53:32.688549 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerStarted","Data":"70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0"} Jan 22 16:53:32 crc kubenswrapper[4758]: I0122 16:53:32.688558 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerStarted","Data":"302a40d42a80394d7604f6ce7b72d1227d5adbaedfbcff235667ede3f1edc4b7"} Jan 22 16:53:32 crc kubenswrapper[4758]: I0122 16:53:32.822361 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed104d4-892d-43f0-bb3f-82be92304823" path="/var/lib/kubelet/pods/bed104d4-892d-43f0-bb3f-82be92304823/volumes" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.559887 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.663639 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r6w7\" (UniqueName: \"kubernetes.io/projected/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-kube-api-access-9r6w7\") pod \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.663687 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-scripts\") pod \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.663709 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-combined-ca-bundle\") pod \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.663751 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-httpd-run\") pod \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.663822 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-public-tls-certs\") pod \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.663886 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-config-data\") pod \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.663950 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.663972 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-logs\") pod \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\" (UID: \"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5\") " Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.664912 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-logs" (OuterVolumeSpecName: "logs") pod "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" (UID: "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.667894 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" (UID: "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.675558 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-kube-api-access-9r6w7" (OuterVolumeSpecName: "kube-api-access-9r6w7") pod "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" (UID: "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5"). InnerVolumeSpecName "kube-api-access-9r6w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.696246 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" (UID: "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.721142 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-scripts" (OuterVolumeSpecName: "scripts") pod "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" (UID: "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.725248 4758 generic.go:334] "Generic (PLEG): container finished" podID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerID="80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b" exitCode=0 Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.725319 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5","Type":"ContainerDied","Data":"80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b"} Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.725352 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9c95d79b-1cd4-4f71-9ab9-16081fbc54e5","Type":"ContainerDied","Data":"ae7b623d7c963e4d8fa161d307c65cdfa118cab21c97a1e8714073ef0305a67a"} Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.725495 4758 scope.go:117] "RemoveContainer" containerID="80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.725661 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.732155 4758 generic.go:334] "Generic (PLEG): container finished" podID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerID="811915bfcfccc9a4a5f800579b083a1bf643cbdcda278638ddf797e1bd37b62d" exitCode=0 Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.732222 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5","Type":"ContainerDied","Data":"811915bfcfccc9a4a5f800579b083a1bf643cbdcda278638ddf797e1bd37b62d"} Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.751889 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" (UID: "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.756428 4758 generic.go:334] "Generic (PLEG): container finished" podID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerID="6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb" exitCode=143 Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.756468 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efc0b77e-57a1-4a76-93ae-c56db1fd3969","Type":"ContainerDied","Data":"6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb"} Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.766048 4758 scope.go:117] "RemoveContainer" containerID="1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.767841 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-config-data" (OuterVolumeSpecName: "config-data") pod "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" (UID: "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.769399 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r6w7\" (UniqueName: \"kubernetes.io/projected/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-kube-api-access-9r6w7\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.769428 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.769439 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.769453 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.769463 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.769489 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.769501 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.772827 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" (UID: "9c95d79b-1cd4-4f71-9ab9-16081fbc54e5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.800867 4758 scope.go:117] "RemoveContainer" containerID="80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b" Jan 22 16:53:33 crc kubenswrapper[4758]: E0122 16:53:33.808526 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b\": container with ID starting with 80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b not found: ID does not exist" containerID="80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.808581 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b"} err="failed to get container status \"80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b\": rpc error: code = NotFound desc = could not find container \"80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b\": container with ID starting with 80422c2fb8eae76c949d5f97b8728f124a6c645d863ef08a5e105c3a30f3e98b not found: ID does not exist" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.808611 4758 scope.go:117] "RemoveContainer" containerID="1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.810257 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 22 16:53:33 crc kubenswrapper[4758]: E0122 16:53:33.838645 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1\": container with ID starting with 1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1 not found: ID does not exist" containerID="1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.838756 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1"} err="failed to get container status \"1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1\": rpc error: code = NotFound desc = could not find container \"1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1\": container with ID starting with 1424128c818a82b2e4b52afb45b200751b14469a6aae9b89229e70f0efbf92b1 not found: ID does not exist" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.838794 4758 scope.go:117] "RemoveContainer" containerID="7881cf6a1ea9246b1451350e25b945ccd52405bd209ed861bedc85b51ac01118" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.876141 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:33 crc kubenswrapper[4758]: I0122 16:53:33.876178 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:33 crc kubenswrapper[4758]: E0122 16:53:33.909021 4758 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b0ecf47_60c2_42f1_ba2f_a8be9c9bf5c5.slice/crio-811915bfcfccc9a4a5f800579b083a1bf643cbdcda278638ddf797e1bd37b62d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b0ecf47_60c2_42f1_ba2f_a8be9c9bf5c5.slice/crio-conmon-811915bfcfccc9a4a5f800579b083a1bf643cbdcda278638ddf797e1bd37b62d.scope\": RecentStats: unable to find data in memory cache]" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.186624 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.212917 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.228479 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:53:34 crc kubenswrapper[4758]: E0122 16:53:34.232682 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerName="glance-log" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.233710 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerName="glance-log" Jan 22 16:53:34 crc kubenswrapper[4758]: E0122 16:53:34.234939 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerName="glance-httpd" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.235027 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerName="glance-httpd" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.235398 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerName="glance-log" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.235664 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" containerName="glance-httpd" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.238474 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.242374 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.247096 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.247094 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.326371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-logs\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.326430 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.326528 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.326579 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ptv9\" (UniqueName: \"kubernetes.io/projected/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-kube-api-access-2ptv9\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.326638 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.326697 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.326725 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.326788 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.375968 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428002 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-custom-prometheus-ca\") pod \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428067 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-combined-ca-bundle\") pod \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428249 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5q2b\" (UniqueName: \"kubernetes.io/projected/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-kube-api-access-g5q2b\") pod \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428275 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-config-data\") pod \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428302 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-logs\") pod \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\" (UID: \"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5\") " Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428602 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428651 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ptv9\" (UniqueName: \"kubernetes.io/projected/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-kube-api-access-2ptv9\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428701 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428812 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428837 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428873 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428916 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-logs\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.428936 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.429630 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-logs" (OuterVolumeSpecName: "logs") pod "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" (UID: "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.432985 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.433046 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.433288 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-logs\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.433378 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.435069 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-kube-api-access-g5q2b" (OuterVolumeSpecName: "kube-api-access-g5q2b") pod "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" (UID: "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5"). InnerVolumeSpecName "kube-api-access-g5q2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.437074 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.453446 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.455964 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.470143 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ptv9\" (UniqueName: \"kubernetes.io/projected/cbbd5d99-3b1f-4e99-b3f9-a8c39af70665-kube-api-access-2ptv9\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.487759 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665\") " pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.509486 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" (UID: "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.517950 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.531821 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5q2b\" (UniqueName: \"kubernetes.io/projected/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-kube-api-access-g5q2b\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.531862 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.531905 4758 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.536228 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-config-data" (OuterVolumeSpecName: "config-data") pod "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" (UID: "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.536320 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" (UID: "0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.571569 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.633633 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.633973 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.770826 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerStarted","Data":"103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246"} Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.773623 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5","Type":"ContainerDied","Data":"45d441cf67290b2260d2d1e41cfbf4b22497910f339e5dc6f86d30b25c60d7dd"} Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.773660 4758 scope.go:117] "RemoveContainer" containerID="811915bfcfccc9a4a5f800579b083a1bf643cbdcda278638ddf797e1bd37b62d" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.773797 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.823381 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c95d79b-1cd4-4f71-9ab9-16081fbc54e5" path="/var/lib/kubelet/pods/9c95d79b-1cd4-4f71-9ab9-16081fbc54e5/volumes" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.824387 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.840676 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.857840 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:53:34 crc kubenswrapper[4758]: E0122 16:53:34.858403 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.858421 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" Jan 22 16:53:34 crc kubenswrapper[4758]: E0122 16:53:34.858457 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.858464 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.858681 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.858695 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.858705 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.859477 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.865113 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.892047 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.965698 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98nmr\" (UniqueName: \"kubernetes.io/projected/4917bff0-0c03-454c-b1db-416fe2caaf7f-kube-api-access-98nmr\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.965767 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4917bff0-0c03-454c-b1db-416fe2caaf7f-logs\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.965799 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4917bff0-0c03-454c-b1db-416fe2caaf7f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.966067 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4917bff0-0c03-454c-b1db-416fe2caaf7f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:34 crc kubenswrapper[4758]: I0122 16:53:34.966172 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4917bff0-0c03-454c-b1db-416fe2caaf7f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.068812 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98nmr\" (UniqueName: \"kubernetes.io/projected/4917bff0-0c03-454c-b1db-416fe2caaf7f-kube-api-access-98nmr\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.068863 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4917bff0-0c03-454c-b1db-416fe2caaf7f-logs\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.068929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4917bff0-0c03-454c-b1db-416fe2caaf7f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " 
pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.069126 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4917bff0-0c03-454c-b1db-416fe2caaf7f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.069182 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4917bff0-0c03-454c-b1db-416fe2caaf7f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.069344 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4917bff0-0c03-454c-b1db-416fe2caaf7f-logs\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.074305 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4917bff0-0c03-454c-b1db-416fe2caaf7f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.075521 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4917bff0-0c03-454c-b1db-416fe2caaf7f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.082169 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4917bff0-0c03-454c-b1db-416fe2caaf7f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.087573 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98nmr\" (UniqueName: \"kubernetes.io/projected/4917bff0-0c03-454c-b1db-416fe2caaf7f-kube-api-access-98nmr\") pod \"watcher-decision-engine-0\" (UID: \"4917bff0-0c03-454c-b1db-416fe2caaf7f\") " pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.204668 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:35 crc kubenswrapper[4758]: I0122 16:53:35.819519 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.553875 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.574425 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.836283 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" path="/var/lib/kubelet/pods/0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5/volumes" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.846364 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665","Type":"ContainerStarted","Data":"dde68c045e24b47f88a4616357e69ae3e81c45fe020679c052a1bb0f7e2472dc"} Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.875139 4758 generic.go:334] "Generic (PLEG): container finished" podID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerID="054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27" exitCode=0 Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.875249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efc0b77e-57a1-4a76-93ae-c56db1fd3969","Type":"ContainerDied","Data":"054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27"} Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.875282 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efc0b77e-57a1-4a76-93ae-c56db1fd3969","Type":"ContainerDied","Data":"4d01365d79aedf70c1dc862fa5fa99a13eedb36d4444e68fb8077a2ef5a093dd"} Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.875302 4758 scope.go:117] "RemoveContainer" containerID="054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.875460 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.895792 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="ceilometer-central-agent" containerID="cri-o://70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0" gracePeriod=30 Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.896217 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.896619 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="proxy-httpd" containerID="cri-o://d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415" gracePeriod=30 Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.896830 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="sg-core" containerID="cri-o://103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246" gracePeriod=30 Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.896950 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="ceilometer-notification-agent" containerID="cri-o://40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc" gracePeriod=30 Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.904254 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4917bff0-0c03-454c-b1db-416fe2caaf7f","Type":"ContainerStarted","Data":"9cb5ac6424cd8cb6fe2e00690fb250ee8724f37116d1a864646c885aec5d4dec"} Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.904552 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4917bff0-0c03-454c-b1db-416fe2caaf7f","Type":"ContainerStarted","Data":"a21261604c2cabbd1856393366018af3ffd109157beb1623021bea64f646b3a4"} Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.906473 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-scripts\") pod \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.906548 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjkps\" (UniqueName: \"kubernetes.io/projected/efc0b77e-57a1-4a76-93ae-c56db1fd3969-kube-api-access-kjkps\") pod \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.906669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-combined-ca-bundle\") pod \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.906696 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-internal-tls-certs\") pod \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.906760 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-logs\") pod \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.906821 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-httpd-run\") pod \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.906871 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.906924 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-config-data\") pod \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\" (UID: \"efc0b77e-57a1-4a76-93ae-c56db1fd3969\") " Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.908060 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "efc0b77e-57a1-4a76-93ae-c56db1fd3969" (UID: "efc0b77e-57a1-4a76-93ae-c56db1fd3969"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.908149 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-logs" (OuterVolumeSpecName: "logs") pod "efc0b77e-57a1-4a76-93ae-c56db1fd3969" (UID: "efc0b77e-57a1-4a76-93ae-c56db1fd3969"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.914376 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-scripts" (OuterVolumeSpecName: "scripts") pod "efc0b77e-57a1-4a76-93ae-c56db1fd3969" (UID: "efc0b77e-57a1-4a76-93ae-c56db1fd3969"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.914425 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efc0b77e-57a1-4a76-93ae-c56db1fd3969-kube-api-access-kjkps" (OuterVolumeSpecName: "kube-api-access-kjkps") pod "efc0b77e-57a1-4a76-93ae-c56db1fd3969" (UID: "efc0b77e-57a1-4a76-93ae-c56db1fd3969"). InnerVolumeSpecName "kube-api-access-kjkps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.932301 4758 scope.go:117] "RemoveContainer" containerID="6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.936466 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6301429139999999 podStartE2EDuration="5.936441595s" podCreationTimestamp="2026-01-22 16:53:31 +0000 UTC" firstStartedPulling="2026-01-22 16:53:31.91512574 +0000 UTC m=+1433.398465025" lastFinishedPulling="2026-01-22 16:53:36.221424421 +0000 UTC m=+1437.704763706" observedRunningTime="2026-01-22 16:53:36.931019207 +0000 UTC m=+1438.414358492" watchObservedRunningTime="2026-01-22 16:53:36.936441595 +0000 UTC m=+1438.419780880" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.945998 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "efc0b77e-57a1-4a76-93ae-c56db1fd3969" (UID: "efc0b77e-57a1-4a76-93ae-c56db1fd3969"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.962625 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.962597088 podStartE2EDuration="2.962597088s" podCreationTimestamp="2026-01-22 16:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:36.959443192 +0000 UTC m=+1438.442782477" watchObservedRunningTime="2026-01-22 16:53:36.962597088 +0000 UTC m=+1438.445936373" Jan 22 16:53:36 crc kubenswrapper[4758]: I0122 16:53:36.991056 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efc0b77e-57a1-4a76-93ae-c56db1fd3969" (UID: "efc0b77e-57a1-4a76-93ae-c56db1fd3969"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.009435 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.009484 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.009494 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.009507 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjkps\" (UniqueName: \"kubernetes.io/projected/efc0b77e-57a1-4a76-93ae-c56db1fd3969-kube-api-access-kjkps\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.009521 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.009531 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efc0b77e-57a1-4a76-93ae-c56db1fd3969-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.040849 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-config-data" (OuterVolumeSpecName: "config-data") pod "efc0b77e-57a1-4a76-93ae-c56db1fd3969" (UID: "efc0b77e-57a1-4a76-93ae-c56db1fd3969"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.051955 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "efc0b77e-57a1-4a76-93ae-c56db1fd3969" (UID: "efc0b77e-57a1-4a76-93ae-c56db1fd3969"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.096017 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.110922 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.110947 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.110957 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc0b77e-57a1-4a76-93ae-c56db1fd3969-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.154208 4758 scope.go:117] "RemoveContainer" containerID="054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27" Jan 22 16:53:37 crc kubenswrapper[4758]: E0122 16:53:37.154574 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27\": container with ID starting with 054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27 not found: ID does not exist" containerID="054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.154606 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27"} err="failed to get container status \"054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27\": rpc error: code = NotFound desc = could not find container \"054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27\": container with ID starting with 054f06827200d506586945a26f061d13371e9e314fb007b7f47420d84f5a5d27 not found: ID does not exist" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.154632 4758 scope.go:117] "RemoveContainer" containerID="6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb" Jan 22 16:53:37 crc kubenswrapper[4758]: E0122 16:53:37.155155 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb\": container with ID starting with 6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb not found: ID does not exist" containerID="6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.155177 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb"} err="failed to get container status \"6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb\": rpc error: code = NotFound desc = could not find container \"6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb\": container with ID starting with 6689e0cbf87d5b680eb4600e83e72d441be2e8d99698c2c06afc9f56d1e4deeb not found: ID does not exist" Jan 22 16:53:37 crc 
kubenswrapper[4758]: I0122 16:53:37.216145 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.398465 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.417250 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:53:37 crc kubenswrapper[4758]: E0122 16:53:37.417653 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerName="glance-log" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.417666 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerName="glance-log" Jan 22 16:53:37 crc kubenswrapper[4758]: E0122 16:53:37.417682 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerName="glance-httpd" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.417688 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerName="glance-httpd" Jan 22 16:53:37 crc kubenswrapper[4758]: E0122 16:53:37.417708 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.417715 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0ecf47-60c2-42f1-ba2f-a8be9c9bf5c5" containerName="watcher-decision-engine" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.417954 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerName="glance-log" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.417978 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" containerName="glance-httpd" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.419158 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.424881 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.425051 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.428588 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.600190 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.600828 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.600857 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.600884 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.601122 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqsnh\" (UniqueName: \"kubernetes.io/projected/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-kube-api-access-vqsnh\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.601211 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-logs\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.601230 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.601248 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.703869 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-logs\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.703912 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.703936 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.704058 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.704083 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.704098 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.704121 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.704154 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqsnh\" (UniqueName: \"kubernetes.io/projected/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-kube-api-access-vqsnh\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.705047 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-logs\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.705077 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.705261 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.712076 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.712236 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.712172 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.714313 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.725578 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqsnh\" (UniqueName: \"kubernetes.io/projected/e24622ea-6d08-4bb7-ae62-57d07c5c07aa-kube-api-access-vqsnh\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.741306 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e24622ea-6d08-4bb7-ae62-57d07c5c07aa\") " pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.769325 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.934207 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665","Type":"ContainerStarted","Data":"bfb483aafdcd6f87c531292516d8e658e06a70f8adcf485f2f5d54fe0213bc4a"} Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.942517 4758 generic.go:334] "Generic (PLEG): container finished" podID="a2159289-e740-441a-80f8-0ce0d0806e52" containerID="103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246" exitCode=2 Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.942556 4758 generic.go:334] "Generic (PLEG): container finished" podID="a2159289-e740-441a-80f8-0ce0d0806e52" containerID="40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc" exitCode=0 Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.943640 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerStarted","Data":"d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415"} Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.943680 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerDied","Data":"103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246"} Jan 22 16:53:37 crc kubenswrapper[4758]: I0122 16:53:37.943697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerDied","Data":"40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc"} Jan 22 16:53:38 crc kubenswrapper[4758]: I0122 16:53:38.576653 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 16:53:38 crc kubenswrapper[4758]: I0122 16:53:38.858690 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efc0b77e-57a1-4a76-93ae-c56db1fd3969" path="/var/lib/kubelet/pods/efc0b77e-57a1-4a76-93ae-c56db1fd3969/volumes" Jan 22 16:53:38 crc kubenswrapper[4758]: I0122 16:53:38.955780 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e24622ea-6d08-4bb7-ae62-57d07c5c07aa","Type":"ContainerStarted","Data":"6c80e01521bfc902b75b215604bcf2120d4f3f8e207eacffffd61f31d4a355a9"} Jan 22 16:53:40 crc kubenswrapper[4758]: I0122 16:53:40.009085 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbbd5d99-3b1f-4e99-b3f9-a8c39af70665","Type":"ContainerStarted","Data":"3d8d2a0dd56b7254a29c7de1885793ed627f997776ee0b792abce1609fb082fd"} Jan 22 16:53:40 crc kubenswrapper[4758]: I0122 16:53:40.026970 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e24622ea-6d08-4bb7-ae62-57d07c5c07aa","Type":"ContainerStarted","Data":"47a94f9f2b8b3793a66312bd68fd55cc9c25e9023c989573c460e663158b7f87"} Jan 22 16:53:40 crc kubenswrapper[4758]: I0122 16:53:40.057698 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.057677002 podStartE2EDuration="6.057677002s" podCreationTimestamp="2026-01-22 16:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-22 16:53:40.046395674 +0000 UTC m=+1441.529734979" watchObservedRunningTime="2026-01-22 16:53:40.057677002 +0000 UTC m=+1441.541016287" Jan 22 16:53:41 crc kubenswrapper[4758]: I0122 16:53:41.053175 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e24622ea-6d08-4bb7-ae62-57d07c5c07aa","Type":"ContainerStarted","Data":"a5ef2098921c78e7d12feafc597efd4d58967bd946a204f7e4d41f2311dfb966"} Jan 22 16:53:41 crc kubenswrapper[4758]: I0122 16:53:41.084414 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.084391787 podStartE2EDuration="4.084391787s" podCreationTimestamp="2026-01-22 16:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:41.07347857 +0000 UTC m=+1442.556817845" watchObservedRunningTime="2026-01-22 16:53:41.084391787 +0000 UTC m=+1442.567731062" Jan 22 16:53:43 crc kubenswrapper[4758]: I0122 16:53:43.837884 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:53:43 crc kubenswrapper[4758]: I0122 16:53:43.838170 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:53:44 crc kubenswrapper[4758]: I0122 16:53:44.572817 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 16:53:44 crc kubenswrapper[4758]: I0122 16:53:44.572876 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 16:53:44 crc kubenswrapper[4758]: I0122 16:53:44.606069 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 16:53:44 crc kubenswrapper[4758]: I0122 16:53:44.635542 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 16:53:45 crc kubenswrapper[4758]: I0122 16:53:45.095538 4758 generic.go:334] "Generic (PLEG): container finished" podID="a2159289-e740-441a-80f8-0ce0d0806e52" containerID="70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0" exitCode=0 Jan 22 16:53:45 crc kubenswrapper[4758]: I0122 16:53:45.098697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerDied","Data":"70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0"} Jan 22 16:53:45 crc kubenswrapper[4758]: I0122 16:53:45.098800 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 16:53:45 crc kubenswrapper[4758]: I0122 16:53:45.098825 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 16:53:45 crc kubenswrapper[4758]: I0122 16:53:45.205838 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:45 crc kubenswrapper[4758]: I0122 16:53:45.275535 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:46 crc kubenswrapper[4758]: I0122 16:53:46.105809 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:46 crc kubenswrapper[4758]: I0122 16:53:46.141997 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 22 16:53:47 crc kubenswrapper[4758]: I0122 16:53:47.120717 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:53:47 crc kubenswrapper[4758]: I0122 16:53:47.120773 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:53:47 crc kubenswrapper[4758]: I0122 16:53:47.285562 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 16:53:47 crc kubenswrapper[4758]: I0122 16:53:47.291622 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 16:53:47 crc kubenswrapper[4758]: I0122 16:53:47.769599 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:47 crc kubenswrapper[4758]: I0122 16:53:47.771183 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:47 crc kubenswrapper[4758]: I0122 16:53:47.797140 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:47 crc kubenswrapper[4758]: I0122 16:53:47.812843 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:48 crc kubenswrapper[4758]: I0122 16:53:48.140319 4758 generic.go:334] "Generic (PLEG): container finished" podID="fc06c7d9-b286-48cd-a359-6c18d1cc0e80" containerID="21601da3f1fa3b099a62055dd594476ed77fb3ef4a75505adb0aaba258d9abde" exitCode=0 Jan 22 16:53:48 crc kubenswrapper[4758]: I0122 16:53:48.140451 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qgsmp" event={"ID":"fc06c7d9-b286-48cd-a359-6c18d1cc0e80","Type":"ContainerDied","Data":"21601da3f1fa3b099a62055dd594476ed77fb3ef4a75505adb0aaba258d9abde"} Jan 22 16:53:48 crc kubenswrapper[4758]: I0122 16:53:48.140792 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:48 crc kubenswrapper[4758]: I0122 16:53:48.140971 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.628448 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.780543 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-config-data\") pod \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.780875 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-combined-ca-bundle\") pod \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.780901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-scripts\") pod \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.780940 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bhfv\" (UniqueName: \"kubernetes.io/projected/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-kube-api-access-5bhfv\") pod \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\" (UID: \"fc06c7d9-b286-48cd-a359-6c18d1cc0e80\") " Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.788433 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-kube-api-access-5bhfv" (OuterVolumeSpecName: "kube-api-access-5bhfv") pod "fc06c7d9-b286-48cd-a359-6c18d1cc0e80" (UID: "fc06c7d9-b286-48cd-a359-6c18d1cc0e80"). InnerVolumeSpecName "kube-api-access-5bhfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.789863 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-scripts" (OuterVolumeSpecName: "scripts") pod "fc06c7d9-b286-48cd-a359-6c18d1cc0e80" (UID: "fc06c7d9-b286-48cd-a359-6c18d1cc0e80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.812403 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-config-data" (OuterVolumeSpecName: "config-data") pod "fc06c7d9-b286-48cd-a359-6c18d1cc0e80" (UID: "fc06c7d9-b286-48cd-a359-6c18d1cc0e80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.849687 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc06c7d9-b286-48cd-a359-6c18d1cc0e80" (UID: "fc06c7d9-b286-48cd-a359-6c18d1cc0e80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.882663 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.882697 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.882711 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:49 crc kubenswrapper[4758]: I0122 16:53:49.882720 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bhfv\" (UniqueName: \"kubernetes.io/projected/fc06c7d9-b286-48cd-a359-6c18d1cc0e80-kube-api-access-5bhfv\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.241174 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qgsmp" event={"ID":"fc06c7d9-b286-48cd-a359-6c18d1cc0e80","Type":"ContainerDied","Data":"5d9f60788a6a31b9064b2981ac5a025e1801a7272a155115eeb23625fa0a0f7c"} Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.241240 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9f60788a6a31b9064b2981ac5a025e1801a7272a155115eeb23625fa0a0f7c" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.241304 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qgsmp" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.322728 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 16:53:50 crc kubenswrapper[4758]: E0122 16:53:50.323243 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc06c7d9-b286-48cd-a359-6c18d1cc0e80" containerName="nova-cell0-conductor-db-sync" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.323270 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc06c7d9-b286-48cd-a359-6c18d1cc0e80" containerName="nova-cell0-conductor-db-sync" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.323517 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc06c7d9-b286-48cd-a359-6c18d1cc0e80" containerName="nova-cell0-conductor-db-sync" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.324435 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.326162 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r6mc9" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.326874 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.336433 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.423975 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.424063 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.426014 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.430533 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c9fbe2-1c90-4beb-9154-094e3fdc87d1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"20c9fbe2-1c90-4beb-9154-094e3fdc87d1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.430562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdfst\" (UniqueName: \"kubernetes.io/projected/20c9fbe2-1c90-4beb-9154-094e3fdc87d1-kube-api-access-zdfst\") pod \"nova-cell0-conductor-0\" (UID: \"20c9fbe2-1c90-4beb-9154-094e3fdc87d1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.430693 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c9fbe2-1c90-4beb-9154-094e3fdc87d1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"20c9fbe2-1c90-4beb-9154-094e3fdc87d1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.532044 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c9fbe2-1c90-4beb-9154-094e3fdc87d1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"20c9fbe2-1c90-4beb-9154-094e3fdc87d1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.532105 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdfst\" (UniqueName: \"kubernetes.io/projected/20c9fbe2-1c90-4beb-9154-094e3fdc87d1-kube-api-access-zdfst\") pod \"nova-cell0-conductor-0\" (UID: \"20c9fbe2-1c90-4beb-9154-094e3fdc87d1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.532253 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c9fbe2-1c90-4beb-9154-094e3fdc87d1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"20c9fbe2-1c90-4beb-9154-094e3fdc87d1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.555851 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/20c9fbe2-1c90-4beb-9154-094e3fdc87d1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"20c9fbe2-1c90-4beb-9154-094e3fdc87d1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.561355 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c9fbe2-1c90-4beb-9154-094e3fdc87d1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"20c9fbe2-1c90-4beb-9154-094e3fdc87d1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.563283 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdfst\" (UniqueName: \"kubernetes.io/projected/20c9fbe2-1c90-4beb-9154-094e3fdc87d1-kube-api-access-zdfst\") pod \"nova-cell0-conductor-0\" (UID: \"20c9fbe2-1c90-4beb-9154-094e3fdc87d1\") " pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:50 crc kubenswrapper[4758]: I0122 16:53:50.643197 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:51 crc kubenswrapper[4758]: I0122 16:53:51.223327 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 16:53:51 crc kubenswrapper[4758]: I0122 16:53:51.386782 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"20c9fbe2-1c90-4beb-9154-094e3fdc87d1","Type":"ContainerStarted","Data":"69f7d595e929b42c5666f9c8b2c669a93e873611e0d285cbb8aff54e81833c46"} Jan 22 16:53:52 crc kubenswrapper[4758]: I0122 16:53:52.398400 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"20c9fbe2-1c90-4beb-9154-094e3fdc87d1","Type":"ContainerStarted","Data":"c7a0427a5b419d714cfb5b03eb677e3c3d4b2989a23d51fe94fc5fd5a6cdf986"} Jan 22 16:53:52 crc kubenswrapper[4758]: I0122 16:53:52.399009 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 22 16:53:52 crc kubenswrapper[4758]: I0122 16:53:52.435275 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.435233111 podStartE2EDuration="2.435233111s" podCreationTimestamp="2026-01-22 16:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:52.413937791 +0000 UTC m=+1453.897277076" watchObservedRunningTime="2026-01-22 16:53:52.435233111 +0000 UTC m=+1453.918572396" Jan 22 16:54:00 crc kubenswrapper[4758]: I0122 16:54:00.685651 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.303562 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-tzfkb"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.304985 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.306987 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.307264 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.316266 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-tzfkb"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.362550 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-config-data\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.362608 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-scripts\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.362630 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62cd6\" (UniqueName: \"kubernetes.io/projected/18850dee-b495-42e5-87ee-915b6c822255-kube-api-access-62cd6\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.362679 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.390708 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.465095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.465534 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-config-data\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.466180 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-scripts\") pod 
\"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.466458 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62cd6\" (UniqueName: \"kubernetes.io/projected/18850dee-b495-42e5-87ee-915b6c822255-kube-api-access-62cd6\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.472205 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.476536 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-scripts\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.476603 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.477875 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.483046 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.488368 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-config-data\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.504820 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.511289 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62cd6\" (UniqueName: \"kubernetes.io/projected/18850dee-b495-42e5-87ee-915b6c822255-kube-api-access-62cd6\") pod \"nova-cell0-cell-mapping-tzfkb\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.569704 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-config-data\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.569774 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.569810 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/455e2446-54d3-44f8-8d68-158d62c5f0c7-logs\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.569880 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmhh\" (UniqueName: \"kubernetes.io/projected/455e2446-54d3-44f8-8d68-158d62c5f0c7-kube-api-access-wlmhh\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.584844 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.586472 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.600762 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.629169 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.639799 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.641229 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.645339 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.659061 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671368 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vmcj\" (UniqueName: \"kubernetes.io/projected/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-kube-api-access-5vmcj\") pod \"nova-scheduler-0\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671421 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlmhh\" (UniqueName: \"kubernetes.io/projected/455e2446-54d3-44f8-8d68-158d62c5f0c7-kube-api-access-wlmhh\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671445 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-config-data\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671463 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4d248e-cf33-442f-87c3-53f9be75e3a1-logs\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671548 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhjl2\" (UniqueName: \"kubernetes.io/projected/3d4d248e-cf33-442f-87c3-53f9be75e3a1-kube-api-access-xhjl2\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671568 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-config-data\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671589 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-config-data\") pod \"nova-scheduler-0\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671643 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671671 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.671695 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/455e2446-54d3-44f8-8d68-158d62c5f0c7-logs\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.672173 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/455e2446-54d3-44f8-8d68-158d62c5f0c7-logs\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.677597 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.683729 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-config-data\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 
16:54:01.703233 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlmhh\" (UniqueName: \"kubernetes.io/projected/455e2446-54d3-44f8-8d68-158d62c5f0c7-kube-api-access-wlmhh\") pod \"nova-api-0\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.717331 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.773186 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vmcj\" (UniqueName: \"kubernetes.io/projected/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-kube-api-access-5vmcj\") pod \"nova-scheduler-0\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.773278 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-config-data\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.773306 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4d248e-cf33-442f-87c3-53f9be75e3a1-logs\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.773430 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhjl2\" (UniqueName: \"kubernetes.io/projected/3d4d248e-cf33-442f-87c3-53f9be75e3a1-kube-api-access-xhjl2\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.773467 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-config-data\") pod \"nova-scheduler-0\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.773492 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.773518 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.774386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4d248e-cf33-442f-87c3-53f9be75e3a1-logs\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.778258 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-config-data\") pod \"nova-scheduler-0\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.787401 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.788206 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-config-data\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.788308 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.788403 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.790174 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.802711 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.808833 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.865623 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhjl2\" (UniqueName: \"kubernetes.io/projected/3d4d248e-cf33-442f-87c3-53f9be75e3a1-kube-api-access-xhjl2\") pod \"nova-metadata-0\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.888368 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc6789cf7-9vznq"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.891283 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.891570 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.891844 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.891952 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w4qh\" (UniqueName: \"kubernetes.io/projected/59cb2cdb-5311-43ef-9aa9-ff9294b484da-kube-api-access-8w4qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.892686 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vmcj\" (UniqueName: \"kubernetes.io/projected/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-kube-api-access-5vmcj\") pod \"nova-scheduler-0\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.899135 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.900650 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.901255 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc6789cf7-9vznq"] Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.942091 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.994513 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.994574 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.994707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.994787 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w4qh\" (UniqueName: \"kubernetes.io/projected/59cb2cdb-5311-43ef-9aa9-ff9294b484da-kube-api-access-8w4qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.994819 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sph6z\" (UniqueName: \"kubernetes.io/projected/d77dabf3-2031-4c96-a78f-bb704b2f7f84-kube-api-access-sph6z\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.994857 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-svc\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.994994 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-config\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.995014 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:01 crc kubenswrapper[4758]: I0122 16:54:01.995053 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-config-data\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.003508 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.009230 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.022284 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w4qh\" (UniqueName: \"kubernetes.io/projected/59cb2cdb-5311-43ef-9aa9-ff9294b484da-kube-api-access-8w4qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.097645 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-config\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.097711 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.097785 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.097816 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.097911 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sph6z\" (UniqueName: \"kubernetes.io/projected/d77dabf3-2031-4c96-a78f-bb704b2f7f84-kube-api-access-sph6z\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.097947 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-svc\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 
16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.099060 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-svc\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.099667 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.099818 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.100374 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-config\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.100583 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.123601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sph6z\" (UniqueName: \"kubernetes.io/projected/d77dabf3-2031-4c96-a78f-bb704b2f7f84-kube-api-access-sph6z\") pod \"dnsmasq-dns-5dc6789cf7-9vznq\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.212094 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.285486 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.379012 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-tzfkb"] Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.486054 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.547835 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.592612 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.708517 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"455e2446-54d3-44f8-8d68-158d62c5f0c7","Type":"ContainerStarted","Data":"d503a3db138323d34683df5c5e9218a686487e9c4fadc4015a489912a87b76c2"} Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.711530 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d4d248e-cf33-442f-87c3-53f9be75e3a1","Type":"ContainerStarted","Data":"1789d47933297e600723c4f512799ee3e86a3077f5d6d4a19f1783a3561776b5"} Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.717203 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1","Type":"ContainerStarted","Data":"378d24742f0caa126bc6c2364d63ee61b3eff54757be319013ff5e61125d45ae"} Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.718494 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tzfkb" event={"ID":"18850dee-b495-42e5-87ee-915b6c822255","Type":"ContainerStarted","Data":"5404ba51f536bb6f32bb8c1d0ba2fe4a0a911d8222acaeb712d866fc5131182f"} Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.802589 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.835259 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kzc5v"] Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.836521 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.839017 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.839396 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.843116 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kzc5v"] Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.932454 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-config-data\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.932528 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bbb7\" (UniqueName: \"kubernetes.io/projected/a1c17792-1219-46ca-9587-380fbaced23b-kube-api-access-2bbb7\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.932580 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-scripts\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.932875 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:02 crc kubenswrapper[4758]: I0122 16:54:02.944998 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc6789cf7-9vznq"] Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.038335 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.038424 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-config-data\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.038472 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bbb7\" (UniqueName: \"kubernetes.io/projected/a1c17792-1219-46ca-9587-380fbaced23b-kube-api-access-2bbb7\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: 
\"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.038516 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-scripts\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.045623 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-scripts\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.049538 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-config-data\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.058337 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.076264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bbb7\" (UniqueName: \"kubernetes.io/projected/a1c17792-1219-46ca-9587-380fbaced23b-kube-api-access-2bbb7\") pod \"nova-cell1-conductor-db-sync-kzc5v\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.199244 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.747688 4758 generic.go:334] "Generic (PLEG): container finished" podID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" containerID="2770bbe396420b96e5e9eda38817ae7e24c154e62dff2e75867809a6bf7d7a60" exitCode=0 Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.747789 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" event={"ID":"d77dabf3-2031-4c96-a78f-bb704b2f7f84","Type":"ContainerDied","Data":"2770bbe396420b96e5e9eda38817ae7e24c154e62dff2e75867809a6bf7d7a60"} Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.748602 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" event={"ID":"d77dabf3-2031-4c96-a78f-bb704b2f7f84","Type":"ContainerStarted","Data":"96bf60d67cd296d39707b47c7b19a427b5bc030d58d9a59296239b217c29c0a6"} Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.750210 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59cb2cdb-5311-43ef-9aa9-ff9294b484da","Type":"ContainerStarted","Data":"726c1ef5c1a312b03be4cb38282fdf86cfff3b3bbe7283e51dd4a724e5e820a5"} Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.754326 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tzfkb" event={"ID":"18850dee-b495-42e5-87ee-915b6c822255","Type":"ContainerStarted","Data":"a0cbc6b1d72c487c50e3e6c601ea461a183173eebd3edecaf941ac8870947bbe"} Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.793219 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kzc5v"] Jan 22 16:54:03 crc kubenswrapper[4758]: I0122 16:54:03.805673 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-tzfkb" podStartSLOduration=2.805654448 podStartE2EDuration="2.805654448s" podCreationTimestamp="2026-01-22 16:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:03.791467242 +0000 UTC m=+1465.274806527" watchObservedRunningTime="2026-01-22 16:54:03.805654448 +0000 UTC m=+1465.288993733" Jan 22 16:54:05 crc kubenswrapper[4758]: I0122 16:54:05.772972 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kzc5v" event={"ID":"a1c17792-1219-46ca-9587-380fbaced23b","Type":"ContainerStarted","Data":"8eb6b0b9f3462722f0c9cfc9cb51ddb8453af25d6bc44329224c9703534f542e"} Jan 22 16:54:06 crc kubenswrapper[4758]: I0122 16:54:06.668959 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:06 crc kubenswrapper[4758]: I0122 16:54:06.678765 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 16:54:07 crc kubenswrapper[4758]: E0122 16:54:07.227921 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2159289_e740_441a_80f8_0ce0d0806e52.slice/crio-d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415.scope\": RecentStats: unable to find data in memory cache]" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.732307 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.794993 4758 generic.go:334] "Generic (PLEG): container finished" podID="a2159289-e740-441a-80f8-0ce0d0806e52" containerID="d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415" exitCode=137 Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.795035 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerDied","Data":"d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415"} Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.795063 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.795078 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2159289-e740-441a-80f8-0ce0d0806e52","Type":"ContainerDied","Data":"302a40d42a80394d7604f6ce7b72d1227d5adbaedfbcff235667ede3f1edc4b7"} Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.795098 4758 scope.go:117] "RemoveContainer" containerID="d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.816594 4758 scope.go:117] "RemoveContainer" containerID="103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.834692 4758 scope.go:117] "RemoveContainer" containerID="40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.865022 4758 scope.go:117] "RemoveContainer" containerID="70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.889480 4758 scope.go:117] "RemoveContainer" containerID="d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415" Jan 22 16:54:07 crc kubenswrapper[4758]: E0122 16:54:07.890063 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415\": container with ID starting with d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415 not found: ID does not exist" containerID="d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.890100 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415"} err="failed to get container status \"d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415\": rpc error: code = NotFound desc = could not find container \"d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415\": container with ID starting with d5d69682e0ddbf28aa034c0b2d0a7a43364866d6a47929b54b40e79df8428415 not found: ID does not exist" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.890129 4758 scope.go:117] "RemoveContainer" containerID="103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246" Jan 22 16:54:07 crc kubenswrapper[4758]: E0122 16:54:07.891638 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246\": container with ID starting with 
103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246 not found: ID does not exist" containerID="103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.891686 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246"} err="failed to get container status \"103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246\": rpc error: code = NotFound desc = could not find container \"103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246\": container with ID starting with 103fb12477bec16222596de0948f2f4d064fb5c6a85a08995ffed00e255a2246 not found: ID does not exist" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.891715 4758 scope.go:117] "RemoveContainer" containerID="40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc" Jan 22 16:54:07 crc kubenswrapper[4758]: E0122 16:54:07.893257 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc\": container with ID starting with 40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc not found: ID does not exist" containerID="40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.893290 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc"} err="failed to get container status \"40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc\": rpc error: code = NotFound desc = could not find container \"40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc\": container with ID starting with 40b4be892f1dad1c0a5d0dc146d7263fd345c1f36cb6b5fd1a42eede28379bfc not found: ID does not exist" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.893311 4758 scope.go:117] "RemoveContainer" containerID="70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0" Jan 22 16:54:07 crc kubenswrapper[4758]: E0122 16:54:07.895916 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0\": container with ID starting with 70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0 not found: ID does not exist" containerID="70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.895954 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0"} err="failed to get container status \"70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0\": rpc error: code = NotFound desc = could not find container \"70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0\": container with ID starting with 70ee73a3d29c61fb0b5990cbab94843b27b8971510b0c5c2c45873ae0392f7c0 not found: ID does not exist" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.898842 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-sg-core-conf-yaml\") pod \"a2159289-e740-441a-80f8-0ce0d0806e52\" (UID: 
\"a2159289-e740-441a-80f8-0ce0d0806e52\") " Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.898963 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-combined-ca-bundle\") pod \"a2159289-e740-441a-80f8-0ce0d0806e52\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.899038 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-scripts\") pod \"a2159289-e740-441a-80f8-0ce0d0806e52\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.899071 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-run-httpd\") pod \"a2159289-e740-441a-80f8-0ce0d0806e52\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.899123 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-log-httpd\") pod \"a2159289-e740-441a-80f8-0ce0d0806e52\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.899142 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-config-data\") pod \"a2159289-e740-441a-80f8-0ce0d0806e52\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.899239 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bplws\" (UniqueName: \"kubernetes.io/projected/a2159289-e740-441a-80f8-0ce0d0806e52-kube-api-access-bplws\") pod \"a2159289-e740-441a-80f8-0ce0d0806e52\" (UID: \"a2159289-e740-441a-80f8-0ce0d0806e52\") " Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.899900 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a2159289-e740-441a-80f8-0ce0d0806e52" (UID: "a2159289-e740-441a-80f8-0ce0d0806e52"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.900940 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a2159289-e740-441a-80f8-0ce0d0806e52" (UID: "a2159289-e740-441a-80f8-0ce0d0806e52"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.904275 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2159289-e740-441a-80f8-0ce0d0806e52-kube-api-access-bplws" (OuterVolumeSpecName: "kube-api-access-bplws") pod "a2159289-e740-441a-80f8-0ce0d0806e52" (UID: "a2159289-e740-441a-80f8-0ce0d0806e52"). InnerVolumeSpecName "kube-api-access-bplws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.904651 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-scripts" (OuterVolumeSpecName: "scripts") pod "a2159289-e740-441a-80f8-0ce0d0806e52" (UID: "a2159289-e740-441a-80f8-0ce0d0806e52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:07 crc kubenswrapper[4758]: I0122 16:54:07.930579 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a2159289-e740-441a-80f8-0ce0d0806e52" (UID: "a2159289-e740-441a-80f8-0ce0d0806e52"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.001817 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.002146 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.002156 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.002164 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2159289-e740-441a-80f8-0ce0d0806e52-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.002173 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bplws\" (UniqueName: \"kubernetes.io/projected/a2159289-e740-441a-80f8-0ce0d0806e52-kube-api-access-bplws\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.015937 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2159289-e740-441a-80f8-0ce0d0806e52" (UID: "a2159289-e740-441a-80f8-0ce0d0806e52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.038372 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-config-data" (OuterVolumeSpecName: "config-data") pod "a2159289-e740-441a-80f8-0ce0d0806e52" (UID: "a2159289-e740-441a-80f8-0ce0d0806e52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.103528 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.103735 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2159289-e740-441a-80f8-0ce0d0806e52-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.148373 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.196063 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.215223 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:08 crc kubenswrapper[4758]: E0122 16:54:08.215650 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="sg-core" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.215664 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="sg-core" Jan 22 16:54:08 crc kubenswrapper[4758]: E0122 16:54:08.215675 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="proxy-httpd" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.215682 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="proxy-httpd" Jan 22 16:54:08 crc kubenswrapper[4758]: E0122 16:54:08.215702 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="ceilometer-notification-agent" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.215708 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="ceilometer-notification-agent" Jan 22 16:54:08 crc kubenswrapper[4758]: E0122 16:54:08.215722 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="ceilometer-central-agent" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.215730 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="ceilometer-central-agent" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.215915 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="ceilometer-central-agent" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.215934 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="ceilometer-notification-agent" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.215952 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="sg-core" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.215959 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" containerName="proxy-httpd" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.217803 4758 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.230524 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.231056 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.236018 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.319335 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zncrr\" (UniqueName: \"kubernetes.io/projected/e5ced7f7-a89e-41c1-82b7-9fa15533621e-kube-api-access-zncrr\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.319393 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-log-httpd\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.319487 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-scripts\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.319525 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-config-data\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.319547 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.319591 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.319672 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-run-httpd\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.421456 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-scripts\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 
16:54:08.421513 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-config-data\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.421535 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.421570 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.421625 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-run-httpd\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.421686 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zncrr\" (UniqueName: \"kubernetes.io/projected/e5ced7f7-a89e-41c1-82b7-9fa15533621e-kube-api-access-zncrr\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.421704 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-log-httpd\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.422182 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-log-httpd\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.514697 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-run-httpd\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.515374 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-scripts\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.518656 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zncrr\" (UniqueName: \"kubernetes.io/projected/e5ced7f7-a89e-41c1-82b7-9fa15533621e-kube-api-access-zncrr\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.519339 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.520543 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-config-data\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.521140 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.574138 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.830663 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2159289-e740-441a-80f8-0ce0d0806e52" path="/var/lib/kubelet/pods/a2159289-e740-441a-80f8-0ce0d0806e52/volumes" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.831728 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1","Type":"ContainerStarted","Data":"1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8"} Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.831838 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"455e2446-54d3-44f8-8d68-158d62c5f0c7","Type":"ContainerStarted","Data":"a8f0d65e587c195ffe3cc39ff80d47226d74bf4a5b307185fcdd74c40429f2b2"} Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.831931 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"455e2446-54d3-44f8-8d68-158d62c5f0c7","Type":"ContainerStarted","Data":"be4548a9e4640a3cb2f2397951246c9d7f1f7cfaf16f424cd58738806c9e6d4d"} Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.840376 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerName="nova-metadata-log" containerID="cri-o://e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c" gracePeriod=30 Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.840570 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d4d248e-cf33-442f-87c3-53f9be75e3a1","Type":"ContainerStarted","Data":"6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17"} Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.840924 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d4d248e-cf33-442f-87c3-53f9be75e3a1","Type":"ContainerStarted","Data":"e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c"} Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.840616 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerName="nova-metadata-metadata" 
containerID="cri-o://6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17" gracePeriod=30 Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.844174 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kzc5v" event={"ID":"a1c17792-1219-46ca-9587-380fbaced23b","Type":"ContainerStarted","Data":"ea0f7187d9eceffdb826c1735026e3192b78e7d0a69aaa42cbed685c89cb0cd6"} Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.850582 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" event={"ID":"d77dabf3-2031-4c96-a78f-bb704b2f7f84","Type":"ContainerStarted","Data":"44260ebc26f36a8331b9b638f599fbfd97bc4759eda893c7d39cec227d149b20"} Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.851001 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.853123 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59cb2cdb-5311-43ef-9aa9-ff9294b484da","Type":"ContainerStarted","Data":"f5f54603cad078c29e2cfcc26685371110394352b58929ad41e485ee7cfaa985"} Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.853233 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="59cb2cdb-5311-43ef-9aa9-ff9294b484da" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f5f54603cad078c29e2cfcc26685371110394352b58929ad41e485ee7cfaa985" gracePeriod=30 Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.963215 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.102421853 podStartE2EDuration="7.963198379s" podCreationTimestamp="2026-01-22 16:54:01 +0000 UTC" firstStartedPulling="2026-01-22 16:54:02.569598423 +0000 UTC m=+1464.052937698" lastFinishedPulling="2026-01-22 16:54:07.430374939 +0000 UTC m=+1468.913714224" observedRunningTime="2026-01-22 16:54:08.938234418 +0000 UTC m=+1470.421573703" watchObservedRunningTime="2026-01-22 16:54:08.963198379 +0000 UTC m=+1470.446537664" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.967833 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.158690739 podStartE2EDuration="7.967819365s" podCreationTimestamp="2026-01-22 16:54:01 +0000 UTC" firstStartedPulling="2026-01-22 16:54:02.621764536 +0000 UTC m=+1464.105103821" lastFinishedPulling="2026-01-22 16:54:07.430893162 +0000 UTC m=+1468.914232447" observedRunningTime="2026-01-22 16:54:08.953347591 +0000 UTC m=+1470.436686876" watchObservedRunningTime="2026-01-22 16:54:08.967819365 +0000 UTC m=+1470.451158650" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.983922 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" podStartSLOduration=7.983907083 podStartE2EDuration="7.983907083s" podCreationTimestamp="2026-01-22 16:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:08.972728088 +0000 UTC m=+1470.456067373" watchObservedRunningTime="2026-01-22 16:54:08.983907083 +0000 UTC m=+1470.467246368" Jan 22 16:54:08 crc kubenswrapper[4758]: I0122 16:54:08.997943 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.383467339 podStartE2EDuration="7.997920286s" podCreationTimestamp="2026-01-22 16:54:01 +0000 UTC" firstStartedPulling="2026-01-22 16:54:02.821018411 +0000 UTC m=+1464.304357696" lastFinishedPulling="2026-01-22 16:54:07.435471358 +0000 UTC m=+1468.918810643" observedRunningTime="2026-01-22 16:54:08.987137892 +0000 UTC m=+1470.470477177" watchObservedRunningTime="2026-01-22 16:54:08.997920286 +0000 UTC m=+1470.481259571" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.045091 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-kzc5v" podStartSLOduration=7.045068862 podStartE2EDuration="7.045068862s" podCreationTimestamp="2026-01-22 16:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:09.004897226 +0000 UTC m=+1470.488236501" watchObservedRunningTime="2026-01-22 16:54:09.045068862 +0000 UTC m=+1470.528408147" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.071135 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.165944967 podStartE2EDuration="8.071108393s" podCreationTimestamp="2026-01-22 16:54:01 +0000 UTC" firstStartedPulling="2026-01-22 16:54:02.535342959 +0000 UTC m=+1464.018682254" lastFinishedPulling="2026-01-22 16:54:07.440506395 +0000 UTC m=+1468.923845680" observedRunningTime="2026-01-22 16:54:09.038665668 +0000 UTC m=+1470.522004953" watchObservedRunningTime="2026-01-22 16:54:09.071108393 +0000 UTC m=+1470.554447678" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.082092 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.746842 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.859679 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-combined-ca-bundle\") pod \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.859798 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4d248e-cf33-442f-87c3-53f9be75e3a1-logs\") pod \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.859823 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhjl2\" (UniqueName: \"kubernetes.io/projected/3d4d248e-cf33-442f-87c3-53f9be75e3a1-kube-api-access-xhjl2\") pod \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.859939 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-config-data\") pod \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\" (UID: \"3d4d248e-cf33-442f-87c3-53f9be75e3a1\") " Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.860686 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4d248e-cf33-442f-87c3-53f9be75e3a1-logs" (OuterVolumeSpecName: "logs") pod "3d4d248e-cf33-442f-87c3-53f9be75e3a1" (UID: "3d4d248e-cf33-442f-87c3-53f9be75e3a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.864986 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4d248e-cf33-442f-87c3-53f9be75e3a1-kube-api-access-xhjl2" (OuterVolumeSpecName: "kube-api-access-xhjl2") pod "3d4d248e-cf33-442f-87c3-53f9be75e3a1" (UID: "3d4d248e-cf33-442f-87c3-53f9be75e3a1"). InnerVolumeSpecName "kube-api-access-xhjl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.871014 4758 generic.go:334] "Generic (PLEG): container finished" podID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerID="6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17" exitCode=0 Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.871046 4758 generic.go:334] "Generic (PLEG): container finished" podID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerID="e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c" exitCode=143 Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.871086 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d4d248e-cf33-442f-87c3-53f9be75e3a1","Type":"ContainerDied","Data":"6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17"} Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.871128 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d4d248e-cf33-442f-87c3-53f9be75e3a1","Type":"ContainerDied","Data":"e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c"} Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.871138 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d4d248e-cf33-442f-87c3-53f9be75e3a1","Type":"ContainerDied","Data":"1789d47933297e600723c4f512799ee3e86a3077f5d6d4a19f1783a3561776b5"} Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.871153 4758 scope.go:117] "RemoveContainer" containerID="6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.871291 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.877966 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerStarted","Data":"e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8"} Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.878029 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerStarted","Data":"4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715"} Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.878040 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerStarted","Data":"327d33055c3d767282e1fb1d6af6cb1c3bff2dd2e5f7f68d637b34929bfd6cad"} Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.902885 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d4d248e-cf33-442f-87c3-53f9be75e3a1" (UID: "3d4d248e-cf33-442f-87c3-53f9be75e3a1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.905803 4758 scope.go:117] "RemoveContainer" containerID="e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.905892 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-config-data" (OuterVolumeSpecName: "config-data") pod "3d4d248e-cf33-442f-87c3-53f9be75e3a1" (UID: "3d4d248e-cf33-442f-87c3-53f9be75e3a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.946061 4758 scope.go:117] "RemoveContainer" containerID="6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17" Jan 22 16:54:09 crc kubenswrapper[4758]: E0122 16:54:09.946612 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17\": container with ID starting with 6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17 not found: ID does not exist" containerID="6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.946645 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17"} err="failed to get container status \"6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17\": rpc error: code = NotFound desc = could not find container \"6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17\": container with ID starting with 6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17 not found: ID does not exist" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.946666 4758 scope.go:117] "RemoveContainer" containerID="e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c" Jan 22 16:54:09 crc kubenswrapper[4758]: E0122 16:54:09.947167 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c\": container with ID starting with e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c not found: ID does not exist" containerID="e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.947192 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c"} err="failed to get container status \"e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c\": rpc error: code = NotFound desc = could not find container \"e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c\": container with ID starting with e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c not found: ID does not exist" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.947206 4758 scope.go:117] "RemoveContainer" containerID="6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.947919 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17"} err="failed 
to get container status \"6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17\": rpc error: code = NotFound desc = could not find container \"6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17\": container with ID starting with 6d5bec96d91c31d2f155437407b01885aa60062585062804eba2d817d8a82c17 not found: ID does not exist" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.947958 4758 scope.go:117] "RemoveContainer" containerID="e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.952127 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c"} err="failed to get container status \"e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c\": rpc error: code = NotFound desc = could not find container \"e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c\": container with ID starting with e78674579dee7cb7e323ebfbebb7514b6c8f6793a6172c159e94bf380677f30c not found: ID does not exist" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.964570 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.964613 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4d248e-cf33-442f-87c3-53f9be75e3a1-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.964625 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhjl2\" (UniqueName: \"kubernetes.io/projected/3d4d248e-cf33-442f-87c3-53f9be75e3a1-kube-api-access-xhjl2\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:09 crc kubenswrapper[4758]: I0122 16:54:09.964634 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4d248e-cf33-442f-87c3-53f9be75e3a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.330404 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.343802 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.358876 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:10 crc kubenswrapper[4758]: E0122 16:54:10.359334 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerName="nova-metadata-log" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.359355 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerName="nova-metadata-log" Jan 22 16:54:10 crc kubenswrapper[4758]: E0122 16:54:10.359385 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerName="nova-metadata-metadata" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.359392 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerName="nova-metadata-metadata" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.359575 4758 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerName="nova-metadata-metadata" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.359604 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" containerName="nova-metadata-log" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.378994 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.383282 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.384241 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.388105 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.482188 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.482238 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z54j9\" (UniqueName: \"kubernetes.io/projected/2aa14567-c268-46dc-bd38-56eec14f0b95-kube-api-access-z54j9\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.482543 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa14567-c268-46dc-bd38-56eec14f0b95-logs\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.482626 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.482864 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-config-data\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.585411 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.585551 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z54j9\" (UniqueName: \"kubernetes.io/projected/2aa14567-c268-46dc-bd38-56eec14f0b95-kube-api-access-z54j9\") pod \"nova-metadata-0\" 
(UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.585653 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa14567-c268-46dc-bd38-56eec14f0b95-logs\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.585684 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.585796 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-config-data\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.586122 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa14567-c268-46dc-bd38-56eec14f0b95-logs\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.592217 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.594226 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.607310 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z54j9\" (UniqueName: \"kubernetes.io/projected/2aa14567-c268-46dc-bd38-56eec14f0b95-kube-api-access-z54j9\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.609290 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-config-data\") pod \"nova-metadata-0\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.700065 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.829082 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4d248e-cf33-442f-87c3-53f9be75e3a1" path="/var/lib/kubelet/pods/3d4d248e-cf33-442f-87c3-53f9be75e3a1/volumes" Jan 22 16:54:10 crc kubenswrapper[4758]: I0122 16:54:10.898890 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerStarted","Data":"84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e"} Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.189216 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.901717 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.902311 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.910714 4758 generic.go:334] "Generic (PLEG): container finished" podID="18850dee-b495-42e5-87ee-915b6c822255" containerID="a0cbc6b1d72c487c50e3e6c601ea461a183173eebd3edecaf941ac8870947bbe" exitCode=0 Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.910790 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tzfkb" event={"ID":"18850dee-b495-42e5-87ee-915b6c822255","Type":"ContainerDied","Data":"a0cbc6b1d72c487c50e3e6c601ea461a183173eebd3edecaf941ac8870947bbe"} Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.912673 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aa14567-c268-46dc-bd38-56eec14f0b95","Type":"ContainerStarted","Data":"63bbd328655627cb2fdbabc3e2cca7557d7ba50759a9060a8e31245a379787f1"} Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.912728 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aa14567-c268-46dc-bd38-56eec14f0b95","Type":"ContainerStarted","Data":"60a3e36e668981375c061beb49bfdb0cf12dfb0fc50c21be902cc06d2e95e6f9"} Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.943377 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.943689 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 16:54:11 crc kubenswrapper[4758]: I0122 16:54:11.954518 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.213417 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.287903 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.349829 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7466dcbf-g984f"] Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.350082 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" podUID="89b54d64-9045-40b1-a7fc-49d4dce849e6" containerName="dnsmasq-dns" 
containerID="cri-o://04709b65415b5ce55c5e501fd59e6359307278c8ee978a585a593c53c836b627" gracePeriod=10 Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.923594 4758 generic.go:334] "Generic (PLEG): container finished" podID="89b54d64-9045-40b1-a7fc-49d4dce849e6" containerID="04709b65415b5ce55c5e501fd59e6359307278c8ee978a585a593c53c836b627" exitCode=0 Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.923675 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" event={"ID":"89b54d64-9045-40b1-a7fc-49d4dce849e6","Type":"ContainerDied","Data":"04709b65415b5ce55c5e501fd59e6359307278c8ee978a585a593c53c836b627"} Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.923992 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" event={"ID":"89b54d64-9045-40b1-a7fc-49d4dce849e6","Type":"ContainerDied","Data":"0e95c4607190c9ad512fa391d634eb3edd6661d06281150232fe008a9c2ec9a8"} Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.924004 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e95c4607190c9ad512fa391d634eb3edd6661d06281150232fe008a9c2ec9a8" Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.926723 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aa14567-c268-46dc-bd38-56eec14f0b95","Type":"ContainerStarted","Data":"7bbaa7e4f4163b648c41f6b7129a03c93514764318a566d61903ca51cb053711"} Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.950479 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerStarted","Data":"930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a"} Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.950715 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.952008 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.953401 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.953387039 podStartE2EDuration="2.953387039s" podCreationTimestamp="2026-01-22 16:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:12.944787623 +0000 UTC m=+1474.428126919" watchObservedRunningTime="2026-01-22 16:54:12.953387039 +0000 UTC m=+1474.436726354" Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.991349 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 16:54:12 crc kubenswrapper[4758]: I0122 16:54:12.991922 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.355535757 podStartE2EDuration="4.991904809s" podCreationTimestamp="2026-01-22 16:54:08 +0000 UTC" firstStartedPulling="2026-01-22 16:54:09.130598785 +0000 UTC m=+1470.613938070" lastFinishedPulling="2026-01-22 16:54:11.766967837 +0000 UTC m=+1473.250307122" observedRunningTime="2026-01-22 16:54:12.972021477 +0000 UTC m=+1474.455360762" watchObservedRunningTime="2026-01-22 16:54:12.991904809 +0000 UTC m=+1474.475244094" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.034918 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.035314 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.044841 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-config\") pod \"89b54d64-9045-40b1-a7fc-49d4dce849e6\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.044907 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-sb\") pod \"89b54d64-9045-40b1-a7fc-49d4dce849e6\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.044995 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-nb\") pod \"89b54d64-9045-40b1-a7fc-49d4dce849e6\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.045056 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-swift-storage-0\") pod \"89b54d64-9045-40b1-a7fc-49d4dce849e6\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.045186 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp85r\" (UniqueName: \"kubernetes.io/projected/89b54d64-9045-40b1-a7fc-49d4dce849e6-kube-api-access-lp85r\") pod \"89b54d64-9045-40b1-a7fc-49d4dce849e6\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.045220 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-svc\") pod \"89b54d64-9045-40b1-a7fc-49d4dce849e6\" (UID: \"89b54d64-9045-40b1-a7fc-49d4dce849e6\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.090969 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b54d64-9045-40b1-a7fc-49d4dce849e6-kube-api-access-lp85r" (OuterVolumeSpecName: "kube-api-access-lp85r") pod "89b54d64-9045-40b1-a7fc-49d4dce849e6" (UID: "89b54d64-9045-40b1-a7fc-49d4dce849e6"). InnerVolumeSpecName "kube-api-access-lp85r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.142283 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "89b54d64-9045-40b1-a7fc-49d4dce849e6" (UID: "89b54d64-9045-40b1-a7fc-49d4dce849e6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.149245 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp85r\" (UniqueName: \"kubernetes.io/projected/89b54d64-9045-40b1-a7fc-49d4dce849e6-kube-api-access-lp85r\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.149290 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.178293 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89b54d64-9045-40b1-a7fc-49d4dce849e6" (UID: "89b54d64-9045-40b1-a7fc-49d4dce849e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.196376 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-config" (OuterVolumeSpecName: "config") pod "89b54d64-9045-40b1-a7fc-49d4dce849e6" (UID: "89b54d64-9045-40b1-a7fc-49d4dce849e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.204967 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89b54d64-9045-40b1-a7fc-49d4dce849e6" (UID: "89b54d64-9045-40b1-a7fc-49d4dce849e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.206137 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89b54d64-9045-40b1-a7fc-49d4dce849e6" (UID: "89b54d64-9045-40b1-a7fc-49d4dce849e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.251557 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.251594 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.251603 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.251613 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89b54d64-9045-40b1-a7fc-49d4dce849e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.508928 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.659040 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-scripts\") pod \"18850dee-b495-42e5-87ee-915b6c822255\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.659505 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62cd6\" (UniqueName: \"kubernetes.io/projected/18850dee-b495-42e5-87ee-915b6c822255-kube-api-access-62cd6\") pod \"18850dee-b495-42e5-87ee-915b6c822255\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.659612 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-combined-ca-bundle\") pod \"18850dee-b495-42e5-87ee-915b6c822255\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.659721 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-config-data\") pod \"18850dee-b495-42e5-87ee-915b6c822255\" (UID: \"18850dee-b495-42e5-87ee-915b6c822255\") " Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.668900 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-scripts" (OuterVolumeSpecName: "scripts") pod "18850dee-b495-42e5-87ee-915b6c822255" (UID: "18850dee-b495-42e5-87ee-915b6c822255"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.669214 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18850dee-b495-42e5-87ee-915b6c822255-kube-api-access-62cd6" (OuterVolumeSpecName: "kube-api-access-62cd6") pod "18850dee-b495-42e5-87ee-915b6c822255" (UID: "18850dee-b495-42e5-87ee-915b6c822255"). InnerVolumeSpecName "kube-api-access-62cd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.701842 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-config-data" (OuterVolumeSpecName: "config-data") pod "18850dee-b495-42e5-87ee-915b6c822255" (UID: "18850dee-b495-42e5-87ee-915b6c822255"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.713434 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18850dee-b495-42e5-87ee-915b6c822255" (UID: "18850dee-b495-42e5-87ee-915b6c822255"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.762208 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.762465 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62cd6\" (UniqueName: \"kubernetes.io/projected/18850dee-b495-42e5-87ee-915b6c822255-kube-api-access-62cd6\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.762544 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.762605 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18850dee-b495-42e5-87ee-915b6c822255-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.837793 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.837871 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.837934 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.838895 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"199c6be88db26753015fa9e30b754aa271b4aa087623fd5be9e93878eddbc087"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.838969 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://199c6be88db26753015fa9e30b754aa271b4aa087623fd5be9e93878eddbc087" gracePeriod=600 Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.963214 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tzfkb" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.964467 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7466dcbf-g984f" Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.965205 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tzfkb" event={"ID":"18850dee-b495-42e5-87ee-915b6c822255","Type":"ContainerDied","Data":"5404ba51f536bb6f32bb8c1d0ba2fe4a0a911d8222acaeb712d866fc5131182f"} Jan 22 16:54:13 crc kubenswrapper[4758]: I0122 16:54:13.965246 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5404ba51f536bb6f32bb8c1d0ba2fe4a0a911d8222acaeb712d866fc5131182f" Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.016515 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7466dcbf-g984f"] Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.054789 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f7466dcbf-g984f"] Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.124416 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.124670 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-log" containerID="cri-o://be4548a9e4640a3cb2f2397951246c9d7f1f7cfaf16f424cd58738806c9e6d4d" gracePeriod=30 Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.124784 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-api" containerID="cri-o://a8f0d65e587c195ffe3cc39ff80d47226d74bf4a5b307185fcdd74c40429f2b2" gracePeriod=30 Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.149928 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.263774 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.820542 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89b54d64-9045-40b1-a7fc-49d4dce849e6" path="/var/lib/kubelet/pods/89b54d64-9045-40b1-a7fc-49d4dce849e6/volumes" Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.975855 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="199c6be88db26753015fa9e30b754aa271b4aa087623fd5be9e93878eddbc087" exitCode=0 Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.976043 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"199c6be88db26753015fa9e30b754aa271b4aa087623fd5be9e93878eddbc087"} Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.976305 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab"} Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.976332 4758 scope.go:117] "RemoveContainer" containerID="b601f6fca756de859a726aaa8ab0d3554a8d02de3dc2055608cf851a04506590" Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.980078 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerID="be4548a9e4640a3cb2f2397951246c9d7f1f7cfaf16f424cd58738806c9e6d4d" exitCode=143 Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.980316 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerName="nova-metadata-log" containerID="cri-o://63bbd328655627cb2fdbabc3e2cca7557d7ba50759a9060a8e31245a379787f1" gracePeriod=30 Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.980412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"455e2446-54d3-44f8-8d68-158d62c5f0c7","Type":"ContainerDied","Data":"be4548a9e4640a3cb2f2397951246c9d7f1f7cfaf16f424cd58738806c9e6d4d"} Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.980547 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" containerName="nova-scheduler-scheduler" containerID="cri-o://1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8" gracePeriod=30 Jan 22 16:54:14 crc kubenswrapper[4758]: I0122 16:54:14.981066 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerName="nova-metadata-metadata" containerID="cri-o://7bbaa7e4f4163b648c41f6b7129a03c93514764318a566d61903ca51cb053711" gracePeriod=30 Jan 22 16:54:15 crc kubenswrapper[4758]: I0122 16:54:15.700496 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 16:54:15 crc kubenswrapper[4758]: I0122 16:54:15.700817 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.006148 4758 generic.go:334] "Generic (PLEG): container finished" podID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerID="7bbaa7e4f4163b648c41f6b7129a03c93514764318a566d61903ca51cb053711" exitCode=0 Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.006180 4758 generic.go:334] "Generic (PLEG): container finished" podID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerID="63bbd328655627cb2fdbabc3e2cca7557d7ba50759a9060a8e31245a379787f1" exitCode=143 Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.006232 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aa14567-c268-46dc-bd38-56eec14f0b95","Type":"ContainerDied","Data":"7bbaa7e4f4163b648c41f6b7129a03c93514764318a566d61903ca51cb053711"} Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.006292 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aa14567-c268-46dc-bd38-56eec14f0b95","Type":"ContainerDied","Data":"63bbd328655627cb2fdbabc3e2cca7557d7ba50759a9060a8e31245a379787f1"} Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.116615 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.215409 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z54j9\" (UniqueName: \"kubernetes.io/projected/2aa14567-c268-46dc-bd38-56eec14f0b95-kube-api-access-z54j9\") pod \"2aa14567-c268-46dc-bd38-56eec14f0b95\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.215465 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa14567-c268-46dc-bd38-56eec14f0b95-logs\") pod \"2aa14567-c268-46dc-bd38-56eec14f0b95\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.215538 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-nova-metadata-tls-certs\") pod \"2aa14567-c268-46dc-bd38-56eec14f0b95\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.215630 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-combined-ca-bundle\") pod \"2aa14567-c268-46dc-bd38-56eec14f0b95\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.215755 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-config-data\") pod \"2aa14567-c268-46dc-bd38-56eec14f0b95\" (UID: \"2aa14567-c268-46dc-bd38-56eec14f0b95\") " Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.216038 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa14567-c268-46dc-bd38-56eec14f0b95-logs" (OuterVolumeSpecName: "logs") pod "2aa14567-c268-46dc-bd38-56eec14f0b95" (UID: "2aa14567-c268-46dc-bd38-56eec14f0b95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.216316 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa14567-c268-46dc-bd38-56eec14f0b95-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.228524 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa14567-c268-46dc-bd38-56eec14f0b95-kube-api-access-z54j9" (OuterVolumeSpecName: "kube-api-access-z54j9") pod "2aa14567-c268-46dc-bd38-56eec14f0b95" (UID: "2aa14567-c268-46dc-bd38-56eec14f0b95"). InnerVolumeSpecName "kube-api-access-z54j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.255409 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aa14567-c268-46dc-bd38-56eec14f0b95" (UID: "2aa14567-c268-46dc-bd38-56eec14f0b95"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.256580 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-config-data" (OuterVolumeSpecName: "config-data") pod "2aa14567-c268-46dc-bd38-56eec14f0b95" (UID: "2aa14567-c268-46dc-bd38-56eec14f0b95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.290366 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2aa14567-c268-46dc-bd38-56eec14f0b95" (UID: "2aa14567-c268-46dc-bd38-56eec14f0b95"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.318351 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z54j9\" (UniqueName: \"kubernetes.io/projected/2aa14567-c268-46dc-bd38-56eec14f0b95-kube-api-access-z54j9\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.318390 4758 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.318403 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:16 crc kubenswrapper[4758]: I0122 16:54:16.318414 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa14567-c268-46dc-bd38-56eec14f0b95-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:16 crc kubenswrapper[4758]: E0122 16:54:16.902620 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8 is running failed: container process not found" containerID="1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 16:54:16 crc kubenswrapper[4758]: E0122 16:54:16.903399 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8 is running failed: container process not found" containerID="1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 16:54:16 crc kubenswrapper[4758]: E0122 16:54:16.903900 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8 is running failed: container process not found" containerID="1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 16:54:16 crc kubenswrapper[4758]: E0122 16:54:16.903944 4758 prober.go:104] "Probe errored" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" containerName="nova-scheduler-scheduler" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.026651 4758 generic.go:334] "Generic (PLEG): container finished" podID="f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" containerID="1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8" exitCode=0 Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.026827 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1","Type":"ContainerDied","Data":"1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8"} Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.036113 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2aa14567-c268-46dc-bd38-56eec14f0b95","Type":"ContainerDied","Data":"60a3e36e668981375c061beb49bfdb0cf12dfb0fc50c21be902cc06d2e95e6f9"} Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.036172 4758 scope.go:117] "RemoveContainer" containerID="7bbaa7e4f4163b648c41f6b7129a03c93514764318a566d61903ca51cb053711" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.036322 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.070536 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.095353 4758 scope.go:117] "RemoveContainer" containerID="63bbd328655627cb2fdbabc3e2cca7557d7ba50759a9060a8e31245a379787f1" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.100834 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.138179 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:17 crc kubenswrapper[4758]: E0122 16:54:17.138689 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerName="nova-metadata-log" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.138708 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerName="nova-metadata-log" Jan 22 16:54:17 crc kubenswrapper[4758]: E0122 16:54:17.138722 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerName="nova-metadata-metadata" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.138730 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerName="nova-metadata-metadata" Jan 22 16:54:17 crc kubenswrapper[4758]: E0122 16:54:17.138769 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b54d64-9045-40b1-a7fc-49d4dce849e6" containerName="dnsmasq-dns" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.138779 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b54d64-9045-40b1-a7fc-49d4dce849e6" containerName="dnsmasq-dns" Jan 22 16:54:17 crc kubenswrapper[4758]: E0122 16:54:17.138811 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18850dee-b495-42e5-87ee-915b6c822255" 
containerName="nova-manage" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.138817 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="18850dee-b495-42e5-87ee-915b6c822255" containerName="nova-manage" Jan 22 16:54:17 crc kubenswrapper[4758]: E0122 16:54:17.138830 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b54d64-9045-40b1-a7fc-49d4dce849e6" containerName="init" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.138836 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b54d64-9045-40b1-a7fc-49d4dce849e6" containerName="init" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.139068 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerName="nova-metadata-metadata" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.139110 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa14567-c268-46dc-bd38-56eec14f0b95" containerName="nova-metadata-log" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.139120 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="18850dee-b495-42e5-87ee-915b6c822255" containerName="nova-manage" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.139145 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b54d64-9045-40b1-a7fc-49d4dce849e6" containerName="dnsmasq-dns" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.140363 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.143231 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.143517 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.155167 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.249142 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-config-data\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.249545 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-logs\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.249863 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpncv\" (UniqueName: \"kubernetes.io/projected/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-kube-api-access-cpncv\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.249888 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.249987 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.351860 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-logs\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.351912 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-config-data\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.352033 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpncv\" (UniqueName: \"kubernetes.io/projected/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-kube-api-access-cpncv\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.352056 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.352112 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.352224 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-logs\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.358578 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-config-data\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.358852 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.360582 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.368407 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpncv\" (UniqueName: \"kubernetes.io/projected/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-kube-api-access-cpncv\") pod \"nova-metadata-0\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.436166 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.471586 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.554668 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vmcj\" (UniqueName: \"kubernetes.io/projected/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-kube-api-access-5vmcj\") pod \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.554818 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-combined-ca-bundle\") pod \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.554865 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-config-data\") pod \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\" (UID: \"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1\") " Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.560709 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-kube-api-access-5vmcj" (OuterVolumeSpecName: "kube-api-access-5vmcj") pod "f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" (UID: "f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1"). InnerVolumeSpecName "kube-api-access-5vmcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.600349 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-config-data" (OuterVolumeSpecName: "config-data") pod "f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" (UID: "f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.602838 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" (UID: "f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.657289 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vmcj\" (UniqueName: \"kubernetes.io/projected/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-kube-api-access-5vmcj\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.657365 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.657382 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:17 crc kubenswrapper[4758]: I0122 16:54:17.976559 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:54:17 crc kubenswrapper[4758]: W0122 16:54:17.979376 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef732e48_f2b4_48cf_822b_c1dabb02ec5c.slice/crio-91f6d072baae84b549b7be4709fb0477872d828e583d3ac2b36d20e8a806af74 WatchSource:0}: Error finding container 91f6d072baae84b549b7be4709fb0477872d828e583d3ac2b36d20e8a806af74: Status 404 returned error can't find the container with id 91f6d072baae84b549b7be4709fb0477872d828e583d3ac2b36d20e8a806af74 Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.053417 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef732e48-f2b4-48cf-822b-c1dabb02ec5c","Type":"ContainerStarted","Data":"91f6d072baae84b549b7be4709fb0477872d828e583d3ac2b36d20e8a806af74"} Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.057380 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1","Type":"ContainerDied","Data":"378d24742f0caa126bc6c2364d63ee61b3eff54757be319013ff5e61125d45ae"} Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.057434 4758 scope.go:117] "RemoveContainer" containerID="1736b81f30170002dfa31a54db1fb1ea56c05c0449c86aa77768793149b629b8" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.057435 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.121104 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.137356 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.146634 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:54:18 crc kubenswrapper[4758]: E0122 16:54:18.147252 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" containerName="nova-scheduler-scheduler" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.147281 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" containerName="nova-scheduler-scheduler" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.147582 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" containerName="nova-scheduler-scheduler" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.148405 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.150369 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.158543 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.281643 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-config-data\") pod \"nova-scheduler-0\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.281760 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4hnx\" (UniqueName: \"kubernetes.io/projected/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-kube-api-access-q4hnx\") pod \"nova-scheduler-0\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.281828 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.384044 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-config-data\") pod \"nova-scheduler-0\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.384160 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4hnx\" (UniqueName: \"kubernetes.io/projected/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-kube-api-access-q4hnx\") pod \"nova-scheduler-0\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc 
kubenswrapper[4758]: I0122 16:54:18.384224 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.388217 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.388383 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-config-data\") pod \"nova-scheduler-0\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.405718 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4hnx\" (UniqueName: \"kubernetes.io/projected/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-kube-api-access-q4hnx\") pod \"nova-scheduler-0\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.506237 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.822006 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa14567-c268-46dc-bd38-56eec14f0b95" path="/var/lib/kubelet/pods/2aa14567-c268-46dc-bd38-56eec14f0b95/volumes" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.823617 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1" path="/var/lib/kubelet/pods/f1f8ee88-8859-4ef7-a94b-bc75ad2de6d1/volumes" Jan 22 16:54:18 crc kubenswrapper[4758]: I0122 16:54:18.985140 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.075288 4758 generic.go:334] "Generic (PLEG): container finished" podID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerID="a8f0d65e587c195ffe3cc39ff80d47226d74bf4a5b307185fcdd74c40429f2b2" exitCode=0 Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.075369 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"455e2446-54d3-44f8-8d68-158d62c5f0c7","Type":"ContainerDied","Data":"a8f0d65e587c195ffe3cc39ff80d47226d74bf4a5b307185fcdd74c40429f2b2"} Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.076893 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef732e48-f2b4-48cf-822b-c1dabb02ec5c","Type":"ContainerStarted","Data":"2db3fc968b5303b0f720ed6af61aa5662cda312bcb35a9c1d16660eb5ab4418a"} Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.076912 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef732e48-f2b4-48cf-822b-c1dabb02ec5c","Type":"ContainerStarted","Data":"97898b437d0b252168fdc2ceed1cdc4f24c936263623060876488966a3107070"} Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.081566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"8c26bc28-4e84-4218-9bfb-7d7cc6206cac","Type":"ContainerStarted","Data":"4488de1456c7f767e5b8b24caa22cf78e7cf04698fa3a350844526acead4f3bc"} Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.086675 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.107504 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.107483542 podStartE2EDuration="2.107483542s" podCreationTimestamp="2026-01-22 16:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:19.095596968 +0000 UTC m=+1480.578936253" watchObservedRunningTime="2026-01-22 16:54:19.107483542 +0000 UTC m=+1480.590822827" Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.199428 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-config-data\") pod \"455e2446-54d3-44f8-8d68-158d62c5f0c7\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.199867 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-combined-ca-bundle\") pod \"455e2446-54d3-44f8-8d68-158d62c5f0c7\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.199969 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/455e2446-54d3-44f8-8d68-158d62c5f0c7-logs\") pod \"455e2446-54d3-44f8-8d68-158d62c5f0c7\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.200010 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlmhh\" (UniqueName: \"kubernetes.io/projected/455e2446-54d3-44f8-8d68-158d62c5f0c7-kube-api-access-wlmhh\") pod \"455e2446-54d3-44f8-8d68-158d62c5f0c7\" (UID: \"455e2446-54d3-44f8-8d68-158d62c5f0c7\") " Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.208135 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/455e2446-54d3-44f8-8d68-158d62c5f0c7-logs" (OuterVolumeSpecName: "logs") pod "455e2446-54d3-44f8-8d68-158d62c5f0c7" (UID: "455e2446-54d3-44f8-8d68-158d62c5f0c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.223048 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455e2446-54d3-44f8-8d68-158d62c5f0c7-kube-api-access-wlmhh" (OuterVolumeSpecName: "kube-api-access-wlmhh") pod "455e2446-54d3-44f8-8d68-158d62c5f0c7" (UID: "455e2446-54d3-44f8-8d68-158d62c5f0c7"). InnerVolumeSpecName "kube-api-access-wlmhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.240929 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-config-data" (OuterVolumeSpecName: "config-data") pod "455e2446-54d3-44f8-8d68-158d62c5f0c7" (UID: "455e2446-54d3-44f8-8d68-158d62c5f0c7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.303589 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/455e2446-54d3-44f8-8d68-158d62c5f0c7-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.303647 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlmhh\" (UniqueName: \"kubernetes.io/projected/455e2446-54d3-44f8-8d68-158d62c5f0c7-kube-api-access-wlmhh\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.303665 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.311599 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "455e2446-54d3-44f8-8d68-158d62c5f0c7" (UID: "455e2446-54d3-44f8-8d68-158d62c5f0c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:19 crc kubenswrapper[4758]: I0122 16:54:19.405398 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455e2446-54d3-44f8-8d68-158d62c5f0c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.094122 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8c26bc28-4e84-4218-9bfb-7d7cc6206cac","Type":"ContainerStarted","Data":"e15f171d4c44505a373a958aea50becf54aa8f12667b4062e81c229ace225892"} Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.097970 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.100857 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"455e2446-54d3-44f8-8d68-158d62c5f0c7","Type":"ContainerDied","Data":"d503a3db138323d34683df5c5e9218a686487e9c4fadc4015a489912a87b76c2"} Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.100941 4758 scope.go:117] "RemoveContainer" containerID="a8f0d65e587c195ffe3cc39ff80d47226d74bf4a5b307185fcdd74c40429f2b2" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.118687 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.118666614 podStartE2EDuration="2.118666614s" podCreationTimestamp="2026-01-22 16:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:20.117163193 +0000 UTC m=+1481.600502478" watchObservedRunningTime="2026-01-22 16:54:20.118666614 +0000 UTC m=+1481.602005899" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.128672 4758 scope.go:117] "RemoveContainer" containerID="be4548a9e4640a3cb2f2397951246c9d7f1f7cfaf16f424cd58738806c9e6d4d" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.153064 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.174979 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.184985 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:20 crc kubenswrapper[4758]: E0122 16:54:20.185540 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-log" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.185564 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-log" Jan 22 16:54:20 crc kubenswrapper[4758]: E0122 16:54:20.185590 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-api" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.185596 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-api" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.185845 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-log" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.185867 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" containerName="nova-api-api" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.187109 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.190717 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.194110 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.335932 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.336140 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-config-data\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.336195 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3278110-ce90-4374-9bf3-ae452ca7747f-logs\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.336221 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvs4l\" (UniqueName: \"kubernetes.io/projected/c3278110-ce90-4374-9bf3-ae452ca7747f-kube-api-access-gvs4l\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.437828 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3278110-ce90-4374-9bf3-ae452ca7747f-logs\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.437865 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvs4l\" (UniqueName: \"kubernetes.io/projected/c3278110-ce90-4374-9bf3-ae452ca7747f-kube-api-access-gvs4l\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.437905 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.438046 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-config-data\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.439336 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3278110-ce90-4374-9bf3-ae452ca7747f-logs\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " 
pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.445144 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-config-data\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.475305 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvs4l\" (UniqueName: \"kubernetes.io/projected/c3278110-ce90-4374-9bf3-ae452ca7747f-kube-api-access-gvs4l\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.478685 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.513564 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:20 crc kubenswrapper[4758]: I0122 16:54:20.824080 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455e2446-54d3-44f8-8d68-158d62c5f0c7" path="/var/lib/kubelet/pods/455e2446-54d3-44f8-8d68-158d62c5f0c7/volumes" Jan 22 16:54:21 crc kubenswrapper[4758]: W0122 16:54:21.017898 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3278110_ce90_4374_9bf3_ae452ca7747f.slice/crio-b68d5edb87dd2034bf83872f5d0bef0f1e46f153f5573ce0065307d570dc3269 WatchSource:0}: Error finding container b68d5edb87dd2034bf83872f5d0bef0f1e46f153f5573ce0065307d570dc3269: Status 404 returned error can't find the container with id b68d5edb87dd2034bf83872f5d0bef0f1e46f153f5573ce0065307d570dc3269 Jan 22 16:54:21 crc kubenswrapper[4758]: I0122 16:54:21.018570 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:21 crc kubenswrapper[4758]: I0122 16:54:21.124322 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3278110-ce90-4374-9bf3-ae452ca7747f","Type":"ContainerStarted","Data":"b68d5edb87dd2034bf83872f5d0bef0f1e46f153f5573ce0065307d570dc3269"} Jan 22 16:54:22 crc kubenswrapper[4758]: I0122 16:54:22.142697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3278110-ce90-4374-9bf3-ae452ca7747f","Type":"ContainerStarted","Data":"89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf"} Jan 22 16:54:22 crc kubenswrapper[4758]: I0122 16:54:22.143216 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3278110-ce90-4374-9bf3-ae452ca7747f","Type":"ContainerStarted","Data":"69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4"} Jan 22 16:54:22 crc kubenswrapper[4758]: I0122 16:54:22.170907 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.170884952 podStartE2EDuration="2.170884952s" podCreationTimestamp="2026-01-22 16:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:22.168425405 +0000 UTC m=+1483.651764700" 
watchObservedRunningTime="2026-01-22 16:54:22.170884952 +0000 UTC m=+1483.654224247" Jan 22 16:54:22 crc kubenswrapper[4758]: I0122 16:54:22.471857 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 16:54:22 crc kubenswrapper[4758]: I0122 16:54:22.472231 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 16:54:23 crc kubenswrapper[4758]: I0122 16:54:23.507672 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 16:54:24 crc kubenswrapper[4758]: I0122 16:54:24.171622 4758 generic.go:334] "Generic (PLEG): container finished" podID="a1c17792-1219-46ca-9587-380fbaced23b" containerID="ea0f7187d9eceffdb826c1735026e3192b78e7d0a69aaa42cbed685c89cb0cd6" exitCode=0 Jan 22 16:54:24 crc kubenswrapper[4758]: I0122 16:54:24.171669 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kzc5v" event={"ID":"a1c17792-1219-46ca-9587-380fbaced23b","Type":"ContainerDied","Data":"ea0f7187d9eceffdb826c1735026e3192b78e7d0a69aaa42cbed685c89cb0cd6"} Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.517579 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.644514 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bbb7\" (UniqueName: \"kubernetes.io/projected/a1c17792-1219-46ca-9587-380fbaced23b-kube-api-access-2bbb7\") pod \"a1c17792-1219-46ca-9587-380fbaced23b\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.644625 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-combined-ca-bundle\") pod \"a1c17792-1219-46ca-9587-380fbaced23b\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.644844 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-config-data\") pod \"a1c17792-1219-46ca-9587-380fbaced23b\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.644881 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-scripts\") pod \"a1c17792-1219-46ca-9587-380fbaced23b\" (UID: \"a1c17792-1219-46ca-9587-380fbaced23b\") " Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.651241 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c17792-1219-46ca-9587-380fbaced23b-kube-api-access-2bbb7" (OuterVolumeSpecName: "kube-api-access-2bbb7") pod "a1c17792-1219-46ca-9587-380fbaced23b" (UID: "a1c17792-1219-46ca-9587-380fbaced23b"). InnerVolumeSpecName "kube-api-access-2bbb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.652145 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-scripts" (OuterVolumeSpecName: "scripts") pod "a1c17792-1219-46ca-9587-380fbaced23b" (UID: "a1c17792-1219-46ca-9587-380fbaced23b"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.682223 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1c17792-1219-46ca-9587-380fbaced23b" (UID: "a1c17792-1219-46ca-9587-380fbaced23b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.684733 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-config-data" (OuterVolumeSpecName: "config-data") pod "a1c17792-1219-46ca-9587-380fbaced23b" (UID: "a1c17792-1219-46ca-9587-380fbaced23b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.747034 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bbb7\" (UniqueName: \"kubernetes.io/projected/a1c17792-1219-46ca-9587-380fbaced23b-kube-api-access-2bbb7\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.747072 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.747086 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:25 crc kubenswrapper[4758]: I0122 16:54:25.747097 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c17792-1219-46ca-9587-380fbaced23b-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.195706 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kzc5v" event={"ID":"a1c17792-1219-46ca-9587-380fbaced23b","Type":"ContainerDied","Data":"8eb6b0b9f3462722f0c9cfc9cb51ddb8453af25d6bc44329224c9703534f542e"} Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.195756 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb6b0b9f3462722f0c9cfc9cb51ddb8453af25d6bc44329224c9703534f542e" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.195817 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kzc5v" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.289435 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 16:54:26 crc kubenswrapper[4758]: E0122 16:54:26.290252 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c17792-1219-46ca-9587-380fbaced23b" containerName="nova-cell1-conductor-db-sync" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.290281 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c17792-1219-46ca-9587-380fbaced23b" containerName="nova-cell1-conductor-db-sync" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.291447 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c17792-1219-46ca-9587-380fbaced23b" containerName="nova-cell1-conductor-db-sync" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.293712 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.297704 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.305947 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.361077 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftmqb\" (UniqueName: \"kubernetes.io/projected/56eabbf1-f0f4-4d6d-8839-47dee8e04278-kube-api-access-ftmqb\") pod \"nova-cell1-conductor-0\" (UID: \"56eabbf1-f0f4-4d6d-8839-47dee8e04278\") " pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.361149 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56eabbf1-f0f4-4d6d-8839-47dee8e04278-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"56eabbf1-f0f4-4d6d-8839-47dee8e04278\") " pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.361173 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56eabbf1-f0f4-4d6d-8839-47dee8e04278-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"56eabbf1-f0f4-4d6d-8839-47dee8e04278\") " pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.463230 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftmqb\" (UniqueName: \"kubernetes.io/projected/56eabbf1-f0f4-4d6d-8839-47dee8e04278-kube-api-access-ftmqb\") pod \"nova-cell1-conductor-0\" (UID: \"56eabbf1-f0f4-4d6d-8839-47dee8e04278\") " pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.463286 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56eabbf1-f0f4-4d6d-8839-47dee8e04278-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"56eabbf1-f0f4-4d6d-8839-47dee8e04278\") " pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.463308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/56eabbf1-f0f4-4d6d-8839-47dee8e04278-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"56eabbf1-f0f4-4d6d-8839-47dee8e04278\") " pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.468778 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56eabbf1-f0f4-4d6d-8839-47dee8e04278-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"56eabbf1-f0f4-4d6d-8839-47dee8e04278\") " pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.474437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56eabbf1-f0f4-4d6d-8839-47dee8e04278-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"56eabbf1-f0f4-4d6d-8839-47dee8e04278\") " pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.483705 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftmqb\" (UniqueName: \"kubernetes.io/projected/56eabbf1-f0f4-4d6d-8839-47dee8e04278-kube-api-access-ftmqb\") pod \"nova-cell1-conductor-0\" (UID: \"56eabbf1-f0f4-4d6d-8839-47dee8e04278\") " pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:26 crc kubenswrapper[4758]: I0122 16:54:26.613135 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:27 crc kubenswrapper[4758]: W0122 16:54:27.061534 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56eabbf1_f0f4_4d6d_8839_47dee8e04278.slice/crio-1c4098781cbf21de2739e4abb62ed67eca9243a54c0778d02bc1a2ec10155e1a WatchSource:0}: Error finding container 1c4098781cbf21de2739e4abb62ed67eca9243a54c0778d02bc1a2ec10155e1a: Status 404 returned error can't find the container with id 1c4098781cbf21de2739e4abb62ed67eca9243a54c0778d02bc1a2ec10155e1a Jan 22 16:54:27 crc kubenswrapper[4758]: I0122 16:54:27.064293 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 16:54:27 crc kubenswrapper[4758]: I0122 16:54:27.205441 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"56eabbf1-f0f4-4d6d-8839-47dee8e04278","Type":"ContainerStarted","Data":"1c4098781cbf21de2739e4abb62ed67eca9243a54c0778d02bc1a2ec10155e1a"} Jan 22 16:54:27 crc kubenswrapper[4758]: I0122 16:54:27.471942 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 16:54:27 crc kubenswrapper[4758]: I0122 16:54:27.472311 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 16:54:28 crc kubenswrapper[4758]: I0122 16:54:28.217119 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"56eabbf1-f0f4-4d6d-8839-47dee8e04278","Type":"ContainerStarted","Data":"62485330bd4681c621b245105c1099f5c6082bfcd29138e2b115d3685154793c"} Jan 22 16:54:28 crc kubenswrapper[4758]: I0122 16:54:28.217585 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:28 crc kubenswrapper[4758]: I0122 16:54:28.240021 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.239995857 
podStartE2EDuration="2.239995857s" podCreationTimestamp="2026-01-22 16:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:28.234608991 +0000 UTC m=+1489.717948286" watchObservedRunningTime="2026-01-22 16:54:28.239995857 +0000 UTC m=+1489.723335142" Jan 22 16:54:28 crc kubenswrapper[4758]: I0122 16:54:28.488092 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:54:28 crc kubenswrapper[4758]: I0122 16:54:28.488076 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 16:54:28 crc kubenswrapper[4758]: I0122 16:54:28.507431 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 16:54:28 crc kubenswrapper[4758]: I0122 16:54:28.541591 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 16:54:29 crc kubenswrapper[4758]: I0122 16:54:29.255725 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 16:54:30 crc kubenswrapper[4758]: I0122 16:54:30.515023 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 16:54:30 crc kubenswrapper[4758]: I0122 16:54:30.515705 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 16:54:31 crc kubenswrapper[4758]: I0122 16:54:31.596897 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 16:54:31 crc kubenswrapper[4758]: I0122 16:54:31.598402 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 16:54:36 crc kubenswrapper[4758]: I0122 16:54:36.650373 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 22 16:54:37 crc kubenswrapper[4758]: I0122 16:54:37.479788 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 16:54:37 crc kubenswrapper[4758]: I0122 16:54:37.481709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 16:54:37 crc kubenswrapper[4758]: I0122 16:54:37.486458 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 16:54:38 crc kubenswrapper[4758]: I0122 16:54:38.329512 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 16:54:38 crc 
kubenswrapper[4758]: I0122 16:54:38.667722 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.333655 4758 generic.go:334] "Generic (PLEG): container finished" podID="59cb2cdb-5311-43ef-9aa9-ff9294b484da" containerID="f5f54603cad078c29e2cfcc26685371110394352b58929ad41e485ee7cfaa985" exitCode=137 Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.333860 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59cb2cdb-5311-43ef-9aa9-ff9294b484da","Type":"ContainerDied","Data":"f5f54603cad078c29e2cfcc26685371110394352b58929ad41e485ee7cfaa985"} Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.813196 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.856368 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-combined-ca-bundle\") pod \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.856430 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-config-data\") pod \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.856493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w4qh\" (UniqueName: \"kubernetes.io/projected/59cb2cdb-5311-43ef-9aa9-ff9294b484da-kube-api-access-8w4qh\") pod \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\" (UID: \"59cb2cdb-5311-43ef-9aa9-ff9294b484da\") " Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.864153 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59cb2cdb-5311-43ef-9aa9-ff9294b484da-kube-api-access-8w4qh" (OuterVolumeSpecName: "kube-api-access-8w4qh") pod "59cb2cdb-5311-43ef-9aa9-ff9294b484da" (UID: "59cb2cdb-5311-43ef-9aa9-ff9294b484da"). InnerVolumeSpecName "kube-api-access-8w4qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.887064 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59cb2cdb-5311-43ef-9aa9-ff9294b484da" (UID: "59cb2cdb-5311-43ef-9aa9-ff9294b484da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.888136 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-config-data" (OuterVolumeSpecName: "config-data") pod "59cb2cdb-5311-43ef-9aa9-ff9294b484da" (UID: "59cb2cdb-5311-43ef-9aa9-ff9294b484da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.959176 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w4qh\" (UniqueName: \"kubernetes.io/projected/59cb2cdb-5311-43ef-9aa9-ff9294b484da-kube-api-access-8w4qh\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.959216 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:39 crc kubenswrapper[4758]: I0122 16:54:39.959226 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59cb2cdb-5311-43ef-9aa9-ff9294b484da-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.348278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59cb2cdb-5311-43ef-9aa9-ff9294b484da","Type":"ContainerDied","Data":"726c1ef5c1a312b03be4cb38282fdf86cfff3b3bbe7283e51dd4a724e5e820a5"} Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.348640 4758 scope.go:117] "RemoveContainer" containerID="f5f54603cad078c29e2cfcc26685371110394352b58929ad41e485ee7cfaa985" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.348360 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.388720 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.404550 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.419884 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 16:54:40 crc kubenswrapper[4758]: E0122 16:54:40.420430 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59cb2cdb-5311-43ef-9aa9-ff9294b484da" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.420455 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="59cb2cdb-5311-43ef-9aa9-ff9294b484da" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.420714 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="59cb2cdb-5311-43ef-9aa9-ff9294b484da" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.421570 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.430793 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.430872 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.431112 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.431263 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.520514 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.521184 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.522782 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.527038 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.569442 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.569503 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f66zt\" (UniqueName: \"kubernetes.io/projected/6d192e57-5d00-4cbb-a380-db73a28f70f1-kube-api-access-f66zt\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.569566 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.570103 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.570209 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.672342 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.672411 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.672505 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.672537 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f66zt\" (UniqueName: \"kubernetes.io/projected/6d192e57-5d00-4cbb-a380-db73a28f70f1-kube-api-access-f66zt\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.672620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.680280 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.680401 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.681151 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.691722 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d192e57-5d00-4cbb-a380-db73a28f70f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.693657 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f66zt\" (UniqueName: 
\"kubernetes.io/projected/6d192e57-5d00-4cbb-a380-db73a28f70f1-kube-api-access-f66zt\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d192e57-5d00-4cbb-a380-db73a28f70f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.741380 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:40 crc kubenswrapper[4758]: I0122 16:54:40.822970 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59cb2cdb-5311-43ef-9aa9-ff9294b484da" path="/var/lib/kubelet/pods/59cb2cdb-5311-43ef-9aa9-ff9294b484da/volumes" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.224497 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.360241 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d192e57-5d00-4cbb-a380-db73a28f70f1","Type":"ContainerStarted","Data":"070931d8ccc60b1f5924aa031926486f98147d684a259d0ac69bd3e807869a9a"} Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.363056 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.372778 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.569276 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578cd76f49-qt7ds"] Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.572016 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.607911 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578cd76f49-qt7ds"] Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.695577 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-swift-storage-0\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.695668 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-config\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.695736 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-sb\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.695847 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcv77\" (UniqueName: \"kubernetes.io/projected/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-kube-api-access-pcv77\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " 
pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.695876 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-svc\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.695945 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-nb\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.797840 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-config\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.797916 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-sb\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.797983 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcv77\" (UniqueName: \"kubernetes.io/projected/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-kube-api-access-pcv77\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.798009 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-svc\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.798053 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-nb\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.798119 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-swift-storage-0\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.798711 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-config\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 
16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.798830 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-sb\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.799066 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-swift-storage-0\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.801528 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-svc\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.802727 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-nb\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.833782 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcv77\" (UniqueName: \"kubernetes.io/projected/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-kube-api-access-pcv77\") pod \"dnsmasq-dns-578cd76f49-qt7ds\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:41 crc kubenswrapper[4758]: I0122 16:54:41.916408 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:42 crc kubenswrapper[4758]: I0122 16:54:42.373033 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d192e57-5d00-4cbb-a380-db73a28f70f1","Type":"ContainerStarted","Data":"5389f7e7bc0354d51d958f3fdf7bfd8c234c6c33c518ab435dd7421612ba8c0f"} Jan 22 16:54:42 crc kubenswrapper[4758]: I0122 16:54:42.393929 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.393906865 podStartE2EDuration="2.393906865s" podCreationTimestamp="2026-01-22 16:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:42.387725396 +0000 UTC m=+1503.871064681" watchObservedRunningTime="2026-01-22 16:54:42.393906865 +0000 UTC m=+1503.877246140" Jan 22 16:54:42 crc kubenswrapper[4758]: I0122 16:54:42.419516 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578cd76f49-qt7ds"] Jan 22 16:54:43 crc kubenswrapper[4758]: I0122 16:54:43.382607 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" event={"ID":"a23c56d2-baa4-4aac-b2d2-25da6724e3b1","Type":"ContainerStarted","Data":"928815a0333de4a7af4bf2510656f815e92c41cc62800fe9bcb779c6dada9133"} Jan 22 16:54:44 crc kubenswrapper[4758]: I0122 16:54:44.393549 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23c56d2-baa4-4aac-b2d2-25da6724e3b1" containerID="bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5" exitCode=0 Jan 22 16:54:44 crc kubenswrapper[4758]: I0122 16:54:44.393662 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" event={"ID":"a23c56d2-baa4-4aac-b2d2-25da6724e3b1","Type":"ContainerDied","Data":"bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5"} Jan 22 16:54:45 crc kubenswrapper[4758]: I0122 16:54:45.405512 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" event={"ID":"a23c56d2-baa4-4aac-b2d2-25da6724e3b1","Type":"ContainerStarted","Data":"b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8"} Jan 22 16:54:45 crc kubenswrapper[4758]: I0122 16:54:45.406069 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:45 crc kubenswrapper[4758]: I0122 16:54:45.425485 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:45 crc kubenswrapper[4758]: I0122 16:54:45.425758 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-api" containerID="cri-o://89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf" gracePeriod=30 Jan 22 16:54:45 crc kubenswrapper[4758]: I0122 16:54:45.425701 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-log" containerID="cri-o://69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4" gracePeriod=30 Jan 22 16:54:45 crc kubenswrapper[4758]: I0122 16:54:45.447272 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" podStartSLOduration=4.447251579 
podStartE2EDuration="4.447251579s" podCreationTimestamp="2026-01-22 16:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:45.439140197 +0000 UTC m=+1506.922479482" watchObservedRunningTime="2026-01-22 16:54:45.447251579 +0000 UTC m=+1506.930590864" Jan 22 16:54:45 crc kubenswrapper[4758]: I0122 16:54:45.742141 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.418139 4758 generic.go:334] "Generic (PLEG): container finished" podID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerID="69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4" exitCode=143 Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.418242 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3278110-ce90-4374-9bf3-ae452ca7747f","Type":"ContainerDied","Data":"69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4"} Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.902287 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.903389 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="ceilometer-central-agent" containerID="cri-o://4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715" gracePeriod=30 Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.903920 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="proxy-httpd" containerID="cri-o://930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a" gracePeriod=30 Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.903997 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="sg-core" containerID="cri-o://84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e" gracePeriod=30 Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.904047 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="ceilometer-notification-agent" containerID="cri-o://e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8" gracePeriod=30 Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.931217 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.931411 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="772760c9-f1af-44f5-bfc0-9b949a639e9f" containerName="kube-state-metrics" containerID="cri-o://6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7" gracePeriod=30 Jan 22 16:54:46 crc kubenswrapper[4758]: I0122 16:54:46.993069 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.186419 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-config-data\") pod \"c3278110-ce90-4374-9bf3-ae452ca7747f\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.186493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvs4l\" (UniqueName: \"kubernetes.io/projected/c3278110-ce90-4374-9bf3-ae452ca7747f-kube-api-access-gvs4l\") pod \"c3278110-ce90-4374-9bf3-ae452ca7747f\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.187491 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-combined-ca-bundle\") pod \"c3278110-ce90-4374-9bf3-ae452ca7747f\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.187520 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3278110-ce90-4374-9bf3-ae452ca7747f-logs\") pod \"c3278110-ce90-4374-9bf3-ae452ca7747f\" (UID: \"c3278110-ce90-4374-9bf3-ae452ca7747f\") " Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.188230 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3278110-ce90-4374-9bf3-ae452ca7747f-logs" (OuterVolumeSpecName: "logs") pod "c3278110-ce90-4374-9bf3-ae452ca7747f" (UID: "c3278110-ce90-4374-9bf3-ae452ca7747f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.195715 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3278110-ce90-4374-9bf3-ae452ca7747f-kube-api-access-gvs4l" (OuterVolumeSpecName: "kube-api-access-gvs4l") pod "c3278110-ce90-4374-9bf3-ae452ca7747f" (UID: "c3278110-ce90-4374-9bf3-ae452ca7747f"). InnerVolumeSpecName "kube-api-access-gvs4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.219541 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3278110-ce90-4374-9bf3-ae452ca7747f" (UID: "c3278110-ce90-4374-9bf3-ae452ca7747f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.223932 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-config-data" (OuterVolumeSpecName: "config-data") pod "c3278110-ce90-4374-9bf3-ae452ca7747f" (UID: "c3278110-ce90-4374-9bf3-ae452ca7747f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.291474 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.291511 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3278110-ce90-4374-9bf3-ae452ca7747f-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.291524 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3278110-ce90-4374-9bf3-ae452ca7747f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.291584 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvs4l\" (UniqueName: \"kubernetes.io/projected/c3278110-ce90-4374-9bf3-ae452ca7747f-kube-api-access-gvs4l\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.415702 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.429903 4758 generic.go:334] "Generic (PLEG): container finished" podID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerID="930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a" exitCode=0 Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.429944 4758 generic.go:334] "Generic (PLEG): container finished" podID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerID="84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e" exitCode=2 Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.429956 4758 generic.go:334] "Generic (PLEG): container finished" podID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerID="4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715" exitCode=0 Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.429976 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerDied","Data":"930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a"} Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.430037 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerDied","Data":"84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e"} Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.430053 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerDied","Data":"4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715"} Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.476437 4758 generic.go:334] "Generic (PLEG): container finished" podID="772760c9-f1af-44f5-bfc0-9b949a639e9f" containerID="6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7" exitCode=2 Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.476558 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.477432 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"772760c9-f1af-44f5-bfc0-9b949a639e9f","Type":"ContainerDied","Data":"6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7"} Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.477472 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"772760c9-f1af-44f5-bfc0-9b949a639e9f","Type":"ContainerDied","Data":"10f76fd4984e92250fd0bbeb0545a5e87393aca7916feaf8654f21daa58194c3"} Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.477495 4758 scope.go:117] "RemoveContainer" containerID="6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.488752 4758 generic.go:334] "Generic (PLEG): container finished" podID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerID="89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf" exitCode=0 Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.488781 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3278110-ce90-4374-9bf3-ae452ca7747f","Type":"ContainerDied","Data":"89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf"} Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.488832 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3278110-ce90-4374-9bf3-ae452ca7747f","Type":"ContainerDied","Data":"b68d5edb87dd2034bf83872f5d0bef0f1e46f153f5573ce0065307d570dc3269"} Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.488778 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.509521 4758 scope.go:117] "RemoveContainer" containerID="6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7" Jan 22 16:54:47 crc kubenswrapper[4758]: E0122 16:54:47.511266 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7\": container with ID starting with 6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7 not found: ID does not exist" containerID="6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.511320 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7"} err="failed to get container status \"6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7\": rpc error: code = NotFound desc = could not find container \"6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7\": container with ID starting with 6945f422db816e99c4a31fb4f595ed73d5016f4e61e618d613699a55e148daa7 not found: ID does not exist" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.511350 4758 scope.go:117] "RemoveContainer" containerID="89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.546004 4758 scope.go:117] "RemoveContainer" containerID="69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.554265 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.578999 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.581696 4758 scope.go:117] "RemoveContainer" containerID="89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf" Jan 22 16:54:47 crc kubenswrapper[4758]: E0122 16:54:47.582199 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf\": container with ID starting with 89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf not found: ID does not exist" containerID="89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.582263 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf"} err="failed to get container status \"89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf\": rpc error: code = NotFound desc = could not find container \"89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf\": container with ID starting with 89bec1a70570814c7675707bce6758ddf91569ba773a130bf89f61be71929adf not found: ID does not exist" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.582299 4758 scope.go:117] "RemoveContainer" containerID="69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4" Jan 22 16:54:47 crc kubenswrapper[4758]: E0122 16:54:47.582679 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4\": container with ID starting with 69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4 not found: ID does not exist" containerID="69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.582710 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4"} err="failed to get container status \"69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4\": rpc error: code = NotFound desc = could not find container \"69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4\": container with ID starting with 69f9e28a308bba8979c053b9a79ca099de485a987a8bd8a1c09174f989f5dae4 not found: ID does not exist" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.597915 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brf9k\" (UniqueName: \"kubernetes.io/projected/772760c9-f1af-44f5-bfc0-9b949a639e9f-kube-api-access-brf9k\") pod \"772760c9-f1af-44f5-bfc0-9b949a639e9f\" (UID: \"772760c9-f1af-44f5-bfc0-9b949a639e9f\") " Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.598626 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:47 crc kubenswrapper[4758]: E0122 16:54:47.599136 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-api" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.599155 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-api" Jan 22 16:54:47 crc kubenswrapper[4758]: E0122 16:54:47.599170 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772760c9-f1af-44f5-bfc0-9b949a639e9f" containerName="kube-state-metrics" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.599176 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="772760c9-f1af-44f5-bfc0-9b949a639e9f" containerName="kube-state-metrics" Jan 22 16:54:47 crc kubenswrapper[4758]: E0122 16:54:47.599187 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-log" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.599192 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-log" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.599415 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-log" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.599432 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" containerName="nova-api-api" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.599440 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="772760c9-f1af-44f5-bfc0-9b949a639e9f" containerName="kube-state-metrics" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.600540 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.602376 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772760c9-f1af-44f5-bfc0-9b949a639e9f-kube-api-access-brf9k" (OuterVolumeSpecName: "kube-api-access-brf9k") pod "772760c9-f1af-44f5-bfc0-9b949a639e9f" (UID: "772760c9-f1af-44f5-bfc0-9b949a639e9f"). InnerVolumeSpecName "kube-api-access-brf9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.603828 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.604209 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.605180 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.614019 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.702318 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.702388 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.702567 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.702816 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-logs\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.702863 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bp5z\" (UniqueName: \"kubernetes.io/projected/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-kube-api-access-8bp5z\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.703079 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-config-data\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.703214 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brf9k\" (UniqueName: 
\"kubernetes.io/projected/772760c9-f1af-44f5-bfc0-9b949a639e9f-kube-api-access-brf9k\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.805374 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.805524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-logs\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.805580 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bp5z\" (UniqueName: \"kubernetes.io/projected/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-kube-api-access-8bp5z\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.805714 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-config-data\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.805823 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.805885 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.806024 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-logs\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.809700 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.809726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.809762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-config-data\") pod \"nova-api-0\" (UID: 
\"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.810835 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.824733 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bp5z\" (UniqueName: \"kubernetes.io/projected/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-kube-api-access-8bp5z\") pod \"nova-api-0\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.945564 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.955815 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 16:54:47 crc kubenswrapper[4758]: I0122 16:54:47.987512 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.020536 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.022249 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.028053 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.029095 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.032941 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.211861 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.212011 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srm2c\" (UniqueName: \"kubernetes.io/projected/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-kube-api-access-srm2c\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.212142 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.212330 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.314046 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.315066 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.315602 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.315725 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srm2c\" (UniqueName: \"kubernetes.io/projected/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-kube-api-access-srm2c\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.321234 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.322138 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.322622 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.344471 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srm2c\" (UniqueName: \"kubernetes.io/projected/d5a7a812-eaba-4ae7-8d97-e80ae4f70d78-kube-api-access-srm2c\") pod \"kube-state-metrics-0\" (UID: \"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78\") " pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.396157 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.440814 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.520077 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a","Type":"ContainerStarted","Data":"2a2f3b970c425ba1f2d86284078b5c6771b7c5dc71b45af7f422971f872ddb35"} Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.818627 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772760c9-f1af-44f5-bfc0-9b949a639e9f" path="/var/lib/kubelet/pods/772760c9-f1af-44f5-bfc0-9b949a639e9f/volumes" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.819526 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3278110-ce90-4374-9bf3-ae452ca7747f" path="/var/lib/kubelet/pods/c3278110-ce90-4374-9bf3-ae452ca7747f/volumes" Jan 22 16:54:48 crc kubenswrapper[4758]: I0122 16:54:48.865992 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 16:54:49 crc kubenswrapper[4758]: I0122 16:54:49.534099 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78","Type":"ContainerStarted","Data":"86d883a220d507d58001299566ebc825d30baa30d69c33bbcca8033c0d96efa7"} Jan 22 16:54:49 crc kubenswrapper[4758]: I0122 16:54:49.537395 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a","Type":"ContainerStarted","Data":"287f91ca92e5106b01ad2e91751f2fe89ddd3b49b01ecd47af389841982b9ded"} Jan 22 16:54:49 crc kubenswrapper[4758]: I0122 16:54:49.537425 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a","Type":"ContainerStarted","Data":"5f6db6b5b39cfac1304684b23a2efdc0dd1e0e71ce4fc2bc29daaff0d2a87fc5"} Jan 22 16:54:50 crc kubenswrapper[4758]: I0122 16:54:50.553376 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78","Type":"ContainerStarted","Data":"78a6ec775e3414b464115c9d589c3eae8881ff824d356dbc942d4deea2d4d1d1"} Jan 22 16:54:50 crc kubenswrapper[4758]: I0122 16:54:50.578223 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.104073306 podStartE2EDuration="3.57820274s" podCreationTimestamp="2026-01-22 16:54:47 +0000 UTC" firstStartedPulling="2026-01-22 16:54:48.872084064 +0000 UTC m=+1510.355423349" lastFinishedPulling="2026-01-22 16:54:49.346213498 +0000 UTC m=+1510.829552783" observedRunningTime="2026-01-22 16:54:50.573627334 +0000 UTC m=+1512.056966619" watchObservedRunningTime="2026-01-22 16:54:50.57820274 +0000 UTC m=+1512.061542025" Jan 22 16:54:50 crc kubenswrapper[4758]: I0122 16:54:50.586216 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.586193047 podStartE2EDuration="3.586193047s" podCreationTimestamp="2026-01-22 16:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:49.563188381 +0000 UTC m=+1511.046527676" watchObservedRunningTime="2026-01-22 16:54:50.586193047 +0000 UTC m=+1512.069532332" Jan 22 16:54:50 
crc kubenswrapper[4758]: I0122 16:54:50.742383 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:50 crc kubenswrapper[4758]: I0122 16:54:50.770851 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.564378 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.585355 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.743313 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vlp59"] Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.745055 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.747259 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.747640 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.760544 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vlp59"] Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.905866 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-config-data\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.906459 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-scripts\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.906670 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.906715 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnqpm\" (UniqueName: \"kubernetes.io/projected/e1c22116-ce0a-4806-bbf7-e514519abff0-kube-api-access-dnqpm\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.917929 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:54:51 crc kubenswrapper[4758]: I0122 16:54:51.976501 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc6789cf7-9vznq"] Jan 22 16:54:51 crc 
kubenswrapper[4758]: I0122 16:54:51.976868 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" podUID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" containerName="dnsmasq-dns" containerID="cri-o://44260ebc26f36a8331b9b638f599fbfd97bc4759eda893c7d39cec227d149b20" gracePeriod=10 Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.022016 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.022066 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnqpm\" (UniqueName: \"kubernetes.io/projected/e1c22116-ce0a-4806-bbf7-e514519abff0-kube-api-access-dnqpm\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.022213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-config-data\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.022244 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-scripts\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.029269 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-config-data\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.029288 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.034158 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-scripts\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.052279 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnqpm\" (UniqueName: \"kubernetes.io/projected/e1c22116-ce0a-4806-bbf7-e514519abff0-kube-api-access-dnqpm\") pod \"nova-cell1-cell-mapping-vlp59\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.081024 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.291932 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" podUID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.210:5353: connect: connection refused" Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.575031 4758 generic.go:334] "Generic (PLEG): container finished" podID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" containerID="44260ebc26f36a8331b9b638f599fbfd97bc4759eda893c7d39cec227d149b20" exitCode=0 Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.575104 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" event={"ID":"d77dabf3-2031-4c96-a78f-bb704b2f7f84","Type":"ContainerDied","Data":"44260ebc26f36a8331b9b638f599fbfd97bc4759eda893c7d39cec227d149b20"} Jan 22 16:54:52 crc kubenswrapper[4758]: I0122 16:54:52.646573 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vlp59"] Jan 22 16:54:52 crc kubenswrapper[4758]: W0122 16:54:52.651397 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1c22116_ce0a_4806_bbf7_e514519abff0.slice/crio-a09d3ea3a71440379db77d21df9e4c8c0828f7f71ec99368f71dddb121752690 WatchSource:0}: Error finding container a09d3ea3a71440379db77d21df9e4c8c0828f7f71ec99368f71dddb121752690: Status 404 returned error can't find the container with id a09d3ea3a71440379db77d21df9e4c8c0828f7f71ec99368f71dddb121752690 Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.211353 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.369662 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-config\") pod \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.369722 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-nb\") pod \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.369879 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-sb\") pod \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.370050 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-svc\") pod \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.370071 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sph6z\" (UniqueName: \"kubernetes.io/projected/d77dabf3-2031-4c96-a78f-bb704b2f7f84-kube-api-access-sph6z\") pod \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.370154 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-swift-storage-0\") pod \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\" (UID: \"d77dabf3-2031-4c96-a78f-bb704b2f7f84\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.376983 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d77dabf3-2031-4c96-a78f-bb704b2f7f84-kube-api-access-sph6z" (OuterVolumeSpecName: "kube-api-access-sph6z") pod "d77dabf3-2031-4c96-a78f-bb704b2f7f84" (UID: "d77dabf3-2031-4c96-a78f-bb704b2f7f84"). InnerVolumeSpecName "kube-api-access-sph6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.436478 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d77dabf3-2031-4c96-a78f-bb704b2f7f84" (UID: "d77dabf3-2031-4c96-a78f-bb704b2f7f84"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.436569 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d77dabf3-2031-4c96-a78f-bb704b2f7f84" (UID: "d77dabf3-2031-4c96-a78f-bb704b2f7f84"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.437714 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-config" (OuterVolumeSpecName: "config") pod "d77dabf3-2031-4c96-a78f-bb704b2f7f84" (UID: "d77dabf3-2031-4c96-a78f-bb704b2f7f84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.439855 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d77dabf3-2031-4c96-a78f-bb704b2f7f84" (UID: "d77dabf3-2031-4c96-a78f-bb704b2f7f84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.447192 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d77dabf3-2031-4c96-a78f-bb704b2f7f84" (UID: "d77dabf3-2031-4c96-a78f-bb704b2f7f84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.472895 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.472936 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sph6z\" (UniqueName: \"kubernetes.io/projected/d77dabf3-2031-4c96-a78f-bb704b2f7f84-kube-api-access-sph6z\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.472953 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.472967 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.472977 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.472989 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d77dabf3-2031-4c96-a78f-bb704b2f7f84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.522839 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.590042 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" event={"ID":"d77dabf3-2031-4c96-a78f-bb704b2f7f84","Type":"ContainerDied","Data":"96bf60d67cd296d39707b47c7b19a427b5bc030d58d9a59296239b217c29c0a6"} Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.590130 4758 scope.go:117] "RemoveContainer" containerID="44260ebc26f36a8331b9b638f599fbfd97bc4759eda893c7d39cec227d149b20" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.590255 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc6789cf7-9vznq" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.605826 4758 generic.go:334] "Generic (PLEG): container finished" podID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerID="e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8" exitCode=0 Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.605929 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.606710 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerDied","Data":"e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8"} Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.606761 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5ced7f7-a89e-41c1-82b7-9fa15533621e","Type":"ContainerDied","Data":"327d33055c3d767282e1fb1d6af6cb1c3bff2dd2e5f7f68d637b34929bfd6cad"} Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.612939 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vlp59" event={"ID":"e1c22116-ce0a-4806-bbf7-e514519abff0","Type":"ContainerStarted","Data":"64e1a857d67e593db9601cf41360703e3f11f22770e474322a231e40b2dbbd2d"} Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.612983 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vlp59" event={"ID":"e1c22116-ce0a-4806-bbf7-e514519abff0","Type":"ContainerStarted","Data":"a09d3ea3a71440379db77d21df9e4c8c0828f7f71ec99368f71dddb121752690"} Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.640432 4758 scope.go:117] "RemoveContainer" containerID="2770bbe396420b96e5e9eda38817ae7e24c154e62dff2e75867809a6bf7d7a60" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.642954 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vlp59" podStartSLOduration=2.6429390440000002 podStartE2EDuration="2.642939044s" podCreationTimestamp="2026-01-22 16:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:54:53.640613661 +0000 UTC m=+1515.123952946" watchObservedRunningTime="2026-01-22 16:54:53.642939044 +0000 UTC m=+1515.126278329" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.668929 4758 scope.go:117] "RemoveContainer" containerID="930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.669757 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc6789cf7-9vznq"] Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.683042 
4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-config-data\") pod \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.683156 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zncrr\" (UniqueName: \"kubernetes.io/projected/e5ced7f7-a89e-41c1-82b7-9fa15533621e-kube-api-access-zncrr\") pod \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.683194 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-run-httpd\") pod \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.683221 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-log-httpd\") pod \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.683242 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-scripts\") pod \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.683556 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e5ced7f7-a89e-41c1-82b7-9fa15533621e" (UID: "e5ced7f7-a89e-41c1-82b7-9fa15533621e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.683842 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-combined-ca-bundle\") pod \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.683875 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-sg-core-conf-yaml\") pod \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\" (UID: \"e5ced7f7-a89e-41c1-82b7-9fa15533621e\") " Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.684232 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.684440 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e5ced7f7-a89e-41c1-82b7-9fa15533621e" (UID: "e5ced7f7-a89e-41c1-82b7-9fa15533621e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.688420 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc6789cf7-9vznq"] Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.689651 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ced7f7-a89e-41c1-82b7-9fa15533621e-kube-api-access-zncrr" (OuterVolumeSpecName: "kube-api-access-zncrr") pod "e5ced7f7-a89e-41c1-82b7-9fa15533621e" (UID: "e5ced7f7-a89e-41c1-82b7-9fa15533621e"). InnerVolumeSpecName "kube-api-access-zncrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.690972 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-scripts" (OuterVolumeSpecName: "scripts") pod "e5ced7f7-a89e-41c1-82b7-9fa15533621e" (UID: "e5ced7f7-a89e-41c1-82b7-9fa15533621e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.693971 4758 scope.go:117] "RemoveContainer" containerID="84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.720235 4758 scope.go:117] "RemoveContainer" containerID="e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.723034 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e5ced7f7-a89e-41c1-82b7-9fa15533621e" (UID: "e5ced7f7-a89e-41c1-82b7-9fa15533621e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.741797 4758 scope.go:117] "RemoveContainer" containerID="4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.762050 4758 scope.go:117] "RemoveContainer" containerID="930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a" Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.762593 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a\": container with ID starting with 930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a not found: ID does not exist" containerID="930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.762635 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a"} err="failed to get container status \"930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a\": rpc error: code = NotFound desc = could not find container \"930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a\": container with ID starting with 930e6afaf134c86ac2a51e1859aee3271aa996166f460663d9d26704a982000a not found: ID does not exist" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.762667 4758 scope.go:117] "RemoveContainer" containerID="84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e" Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.762967 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e\": container with ID starting with 84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e not found: ID does not exist" containerID="84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.762990 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e"} err="failed to get container status \"84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e\": rpc error: code = NotFound desc = could not find container \"84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e\": container with ID starting with 84b988ab26b2cbaf00e534569b818aefd880af8ecf2497770a497c586f27f20e not found: ID does not exist" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.763007 4758 scope.go:117] "RemoveContainer" containerID="e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8" Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.763308 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8\": container with ID starting with e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8 not found: ID does not exist" containerID="e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.763328 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8"} err="failed to get container status \"e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8\": rpc error: code = NotFound desc = could not find container \"e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8\": container with ID starting with e39dbebd02e22afd1643ed05975ad545e87632dd242581db414641f90b67b1b8 not found: ID does not exist" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.763345 4758 scope.go:117] "RemoveContainer" containerID="4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715" Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.763586 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715\": container with ID starting with 4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715 not found: ID does not exist" containerID="4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.763608 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715"} err="failed to get container status \"4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715\": rpc error: code = NotFound desc = could not find container \"4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715\": container with ID starting with 4b8c7c2883c90cf404ece1037564f60b06c4d9c1158a94e230aa682e257e3715 not found: ID does not exist" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.765472 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5ced7f7-a89e-41c1-82b7-9fa15533621e" (UID: "e5ced7f7-a89e-41c1-82b7-9fa15533621e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.786141 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zncrr\" (UniqueName: \"kubernetes.io/projected/e5ced7f7-a89e-41c1-82b7-9fa15533621e-kube-api-access-zncrr\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.786176 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5ced7f7-a89e-41c1-82b7-9fa15533621e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.786188 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.786199 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.786211 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.798603 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-config-data" (OuterVolumeSpecName: "config-data") pod "e5ced7f7-a89e-41c1-82b7-9fa15533621e" (UID: "e5ced7f7-a89e-41c1-82b7-9fa15533621e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.888485 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ced7f7-a89e-41c1-82b7-9fa15533621e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.944035 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.953495 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.971490 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.971935 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" containerName="dnsmasq-dns" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.971953 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" containerName="dnsmasq-dns" Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.971970 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" containerName="init" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.971977 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" containerName="init" Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.971985 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="proxy-httpd" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.971991 4758 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="proxy-httpd" Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.972015 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="ceilometer-notification-agent" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.972021 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="ceilometer-notification-agent" Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.972032 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="sg-core" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.972039 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="sg-core" Jan 22 16:54:53 crc kubenswrapper[4758]: E0122 16:54:53.972060 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="ceilometer-central-agent" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.972066 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="ceilometer-central-agent" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.972241 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="ceilometer-central-agent" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.972257 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="proxy-httpd" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.972266 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="sg-core" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.972275 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" containerName="ceilometer-notification-agent" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.972291 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" containerName="dnsmasq-dns" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.974367 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.976539 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.978398 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 16:54:53 crc kubenswrapper[4758]: I0122 16:54:53.978652 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.001467 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.092846 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tn2\" (UniqueName: \"kubernetes.io/projected/93923998-0016-4db9-adff-a433c7a8d57c-kube-api-access-p2tn2\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.093046 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.096087 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93923998-0016-4db9-adff-a433c7a8d57c-log-httpd\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.145491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.145962 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-scripts\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.146084 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93923998-0016-4db9-adff-a433c7a8d57c-run-httpd\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.146206 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-config-data\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.146254 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.248038 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.249262 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93923998-0016-4db9-adff-a433c7a8d57c-log-httpd\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.249551 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.249705 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-scripts\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.249954 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93923998-0016-4db9-adff-a433c7a8d57c-run-httpd\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.249827 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93923998-0016-4db9-adff-a433c7a8d57c-log-httpd\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.250402 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93923998-0016-4db9-adff-a433c7a8d57c-run-httpd\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.250623 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-config-data\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.251357 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.251529 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2tn2\" (UniqueName: 
\"kubernetes.io/projected/93923998-0016-4db9-adff-a433c7a8d57c-kube-api-access-p2tn2\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.252789 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.253057 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.253424 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-scripts\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.255545 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-config-data\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.255661 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93923998-0016-4db9-adff-a433c7a8d57c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.269657 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2tn2\" (UniqueName: \"kubernetes.io/projected/93923998-0016-4db9-adff-a433c7a8d57c-kube-api-access-p2tn2\") pod \"ceilometer-0\" (UID: \"93923998-0016-4db9-adff-a433c7a8d57c\") " pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.298077 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.756254 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 16:54:54 crc kubenswrapper[4758]: W0122 16:54:54.760645 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93923998_0016_4db9_adff_a433c7a8d57c.slice/crio-f073fabd29d118165bc808b40a4a404ad3f72204b5a6b5171354243892cbee92 WatchSource:0}: Error finding container f073fabd29d118165bc808b40a4a404ad3f72204b5a6b5171354243892cbee92: Status 404 returned error can't find the container with id f073fabd29d118165bc808b40a4a404ad3f72204b5a6b5171354243892cbee92 Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.823119 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d77dabf3-2031-4c96-a78f-bb704b2f7f84" path="/var/lib/kubelet/pods/d77dabf3-2031-4c96-a78f-bb704b2f7f84/volumes" Jan 22 16:54:54 crc kubenswrapper[4758]: I0122 16:54:54.824947 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ced7f7-a89e-41c1-82b7-9fa15533621e" path="/var/lib/kubelet/pods/e5ced7f7-a89e-41c1-82b7-9fa15533621e/volumes" Jan 22 16:54:55 crc kubenswrapper[4758]: I0122 16:54:55.708520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93923998-0016-4db9-adff-a433c7a8d57c","Type":"ContainerStarted","Data":"ac9b523b39a8fc616563df35ca3aa97f65c7d130f93997569e78a6b68ebfdb47"} Jan 22 16:54:55 crc kubenswrapper[4758]: I0122 16:54:55.708884 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93923998-0016-4db9-adff-a433c7a8d57c","Type":"ContainerStarted","Data":"f073fabd29d118165bc808b40a4a404ad3f72204b5a6b5171354243892cbee92"} Jan 22 16:54:56 crc kubenswrapper[4758]: I0122 16:54:56.723479 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93923998-0016-4db9-adff-a433c7a8d57c","Type":"ContainerStarted","Data":"0b8ebd13ee088271311eb06a3e11d8befc82745349289f4a4c1b3db3f72a8f08"} Jan 22 16:54:56 crc kubenswrapper[4758]: I0122 16:54:56.724082 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93923998-0016-4db9-adff-a433c7a8d57c","Type":"ContainerStarted","Data":"fafb5d2fa75b2b190a38003bc6cece90b275597f24e157d6ae4d1a4780c75472"} Jan 22 16:54:57 crc kubenswrapper[4758]: I0122 16:54:57.946340 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 16:54:57 crc kubenswrapper[4758]: I0122 16:54:57.947711 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 16:54:58 crc kubenswrapper[4758]: I0122 16:54:58.428919 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 16:54:58 crc kubenswrapper[4758]: I0122 16:54:58.750900 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93923998-0016-4db9-adff-a433c7a8d57c","Type":"ContainerStarted","Data":"d10bdaa0e85198e6cda114f158f9170812960510e54686b2e3ea7003ad3bf77e"} Jan 22 16:54:58 crc kubenswrapper[4758]: I0122 16:54:58.751282 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 16:54:58 crc kubenswrapper[4758]: I0122 16:54:58.777062 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.408190124 podStartE2EDuration="5.77702858s" podCreationTimestamp="2026-01-22 16:54:53 +0000 UTC" firstStartedPulling="2026-01-22 16:54:54.763195977 +0000 UTC m=+1516.246535272" lastFinishedPulling="2026-01-22 16:54:58.132034443 +0000 UTC m=+1519.615373728" observedRunningTime="2026-01-22 16:54:58.771644243 +0000 UTC m=+1520.254983528" watchObservedRunningTime="2026-01-22 16:54:58.77702858 +0000 UTC m=+1520.260367865" Jan 22 16:54:58 crc kubenswrapper[4758]: I0122 16:54:58.961896 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:54:58 crc kubenswrapper[4758]: I0122 16:54:58.961936 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:55:00 crc kubenswrapper[4758]: I0122 16:55:00.776006 4758 generic.go:334] "Generic (PLEG): container finished" podID="e1c22116-ce0a-4806-bbf7-e514519abff0" containerID="64e1a857d67e593db9601cf41360703e3f11f22770e474322a231e40b2dbbd2d" exitCode=0 Jan 22 16:55:00 crc kubenswrapper[4758]: I0122 16:55:00.776102 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vlp59" event={"ID":"e1c22116-ce0a-4806-bbf7-e514519abff0","Type":"ContainerDied","Data":"64e1a857d67e593db9601cf41360703e3f11f22770e474322a231e40b2dbbd2d"} Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.215234 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.359539 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-combined-ca-bundle\") pod \"e1c22116-ce0a-4806-bbf7-e514519abff0\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.360492 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnqpm\" (UniqueName: \"kubernetes.io/projected/e1c22116-ce0a-4806-bbf7-e514519abff0-kube-api-access-dnqpm\") pod \"e1c22116-ce0a-4806-bbf7-e514519abff0\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.360647 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-scripts\") pod \"e1c22116-ce0a-4806-bbf7-e514519abff0\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.360826 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-config-data\") pod \"e1c22116-ce0a-4806-bbf7-e514519abff0\" (UID: \"e1c22116-ce0a-4806-bbf7-e514519abff0\") " Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.365904 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c22116-ce0a-4806-bbf7-e514519abff0-kube-api-access-dnqpm" (OuterVolumeSpecName: "kube-api-access-dnqpm") pod "e1c22116-ce0a-4806-bbf7-e514519abff0" (UID: "e1c22116-ce0a-4806-bbf7-e514519abff0"). InnerVolumeSpecName "kube-api-access-dnqpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.376453 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-scripts" (OuterVolumeSpecName: "scripts") pod "e1c22116-ce0a-4806-bbf7-e514519abff0" (UID: "e1c22116-ce0a-4806-bbf7-e514519abff0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.435138 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-config-data" (OuterVolumeSpecName: "config-data") pod "e1c22116-ce0a-4806-bbf7-e514519abff0" (UID: "e1c22116-ce0a-4806-bbf7-e514519abff0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.439938 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1c22116-ce0a-4806-bbf7-e514519abff0" (UID: "e1c22116-ce0a-4806-bbf7-e514519abff0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.464031 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.464082 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnqpm\" (UniqueName: \"kubernetes.io/projected/e1c22116-ce0a-4806-bbf7-e514519abff0-kube-api-access-dnqpm\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.464104 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.464120 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c22116-ce0a-4806-bbf7-e514519abff0-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.801228 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vlp59" event={"ID":"e1c22116-ce0a-4806-bbf7-e514519abff0","Type":"ContainerDied","Data":"a09d3ea3a71440379db77d21df9e4c8c0828f7f71ec99368f71dddb121752690"} Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.801272 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a09d3ea3a71440379db77d21df9e4c8c0828f7f71ec99368f71dddb121752690" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.801281 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vlp59" Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.993371 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:55:02 crc kubenswrapper[4758]: I0122 16:55:02.993772 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8c26bc28-4e84-4218-9bfb-7d7cc6206cac" containerName="nova-scheduler-scheduler" containerID="cri-o://e15f171d4c44505a373a958aea50becf54aa8f12667b4062e81c229ace225892" gracePeriod=30 Jan 22 16:55:03 crc kubenswrapper[4758]: I0122 16:55:03.005862 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:55:03 crc kubenswrapper[4758]: I0122 16:55:03.006095 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-log" containerID="cri-o://5f6db6b5b39cfac1304684b23a2efdc0dd1e0e71ce4fc2bc29daaff0d2a87fc5" gracePeriod=30 Jan 22 16:55:03 crc kubenswrapper[4758]: I0122 16:55:03.006208 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-api" containerID="cri-o://287f91ca92e5106b01ad2e91751f2fe89ddd3b49b01ecd47af389841982b9ded" gracePeriod=30 Jan 22 16:55:03 crc kubenswrapper[4758]: I0122 16:55:03.020025 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:55:03 crc kubenswrapper[4758]: I0122 16:55:03.020260 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" 
containerName="nova-metadata-log" containerID="cri-o://97898b437d0b252168fdc2ceed1cdc4f24c936263623060876488966a3107070" gracePeriod=30 Jan 22 16:55:03 crc kubenswrapper[4758]: I0122 16:55:03.020809 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerName="nova-metadata-metadata" containerID="cri-o://2db3fc968b5303b0f720ed6af61aa5662cda312bcb35a9c1d16660eb5ab4418a" gracePeriod=30 Jan 22 16:55:03 crc kubenswrapper[4758]: E0122 16:55:03.509148 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e15f171d4c44505a373a958aea50becf54aa8f12667b4062e81c229ace225892" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 16:55:03 crc kubenswrapper[4758]: E0122 16:55:03.511284 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e15f171d4c44505a373a958aea50becf54aa8f12667b4062e81c229ace225892" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 16:55:03 crc kubenswrapper[4758]: E0122 16:55:03.516201 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e15f171d4c44505a373a958aea50becf54aa8f12667b4062e81c229ace225892" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 16:55:03 crc kubenswrapper[4758]: E0122 16:55:03.516256 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8c26bc28-4e84-4218-9bfb-7d7cc6206cac" containerName="nova-scheduler-scheduler" Jan 22 16:55:04 crc kubenswrapper[4758]: I0122 16:55:04.894492 4758 generic.go:334] "Generic (PLEG): container finished" podID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerID="2db3fc968b5303b0f720ed6af61aa5662cda312bcb35a9c1d16660eb5ab4418a" exitCode=0 Jan 22 16:55:04 crc kubenswrapper[4758]: I0122 16:55:04.894937 4758 generic.go:334] "Generic (PLEG): container finished" podID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerID="97898b437d0b252168fdc2ceed1cdc4f24c936263623060876488966a3107070" exitCode=143 Jan 22 16:55:04 crc kubenswrapper[4758]: I0122 16:55:04.894710 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef732e48-f2b4-48cf-822b-c1dabb02ec5c","Type":"ContainerDied","Data":"2db3fc968b5303b0f720ed6af61aa5662cda312bcb35a9c1d16660eb5ab4418a"} Jan 22 16:55:04 crc kubenswrapper[4758]: I0122 16:55:04.895135 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef732e48-f2b4-48cf-822b-c1dabb02ec5c","Type":"ContainerDied","Data":"97898b437d0b252168fdc2ceed1cdc4f24c936263623060876488966a3107070"} Jan 22 16:55:04 crc kubenswrapper[4758]: I0122 16:55:04.900398 4758 generic.go:334] "Generic (PLEG): container finished" podID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerID="287f91ca92e5106b01ad2e91751f2fe89ddd3b49b01ecd47af389841982b9ded" exitCode=0 Jan 22 16:55:04 crc kubenswrapper[4758]: I0122 16:55:04.900425 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerID="5f6db6b5b39cfac1304684b23a2efdc0dd1e0e71ce4fc2bc29daaff0d2a87fc5" exitCode=143 Jan 22 16:55:04 crc kubenswrapper[4758]: I0122 16:55:04.900446 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a","Type":"ContainerDied","Data":"287f91ca92e5106b01ad2e91751f2fe89ddd3b49b01ecd47af389841982b9ded"} Jan 22 16:55:04 crc kubenswrapper[4758]: I0122 16:55:04.900488 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a","Type":"ContainerDied","Data":"5f6db6b5b39cfac1304684b23a2efdc0dd1e0e71ce4fc2bc29daaff0d2a87fc5"} Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.077902 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.107407 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-combined-ca-bundle\") pod \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.107554 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpncv\" (UniqueName: \"kubernetes.io/projected/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-kube-api-access-cpncv\") pod \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.107685 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-nova-metadata-tls-certs\") pod \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.107714 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-logs\") pod \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.107802 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-config-data\") pod \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\" (UID: \"ef732e48-f2b4-48cf-822b-c1dabb02ec5c\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.119212 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-kube-api-access-cpncv" (OuterVolumeSpecName: "kube-api-access-cpncv") pod "ef732e48-f2b4-48cf-822b-c1dabb02ec5c" (UID: "ef732e48-f2b4-48cf-822b-c1dabb02ec5c"). InnerVolumeSpecName "kube-api-access-cpncv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.119311 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-logs" (OuterVolumeSpecName: "logs") pod "ef732e48-f2b4-48cf-822b-c1dabb02ec5c" (UID: "ef732e48-f2b4-48cf-822b-c1dabb02ec5c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.163413 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef732e48-f2b4-48cf-822b-c1dabb02ec5c" (UID: "ef732e48-f2b4-48cf-822b-c1dabb02ec5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.180396 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-config-data" (OuterVolumeSpecName: "config-data") pod "ef732e48-f2b4-48cf-822b-c1dabb02ec5c" (UID: "ef732e48-f2b4-48cf-822b-c1dabb02ec5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.213908 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.213950 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.213970 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.213982 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpncv\" (UniqueName: \"kubernetes.io/projected/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-kube-api-access-cpncv\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.222949 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ef732e48-f2b4-48cf-822b-c1dabb02ec5c" (UID: "ef732e48-f2b4-48cf-822b-c1dabb02ec5c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.232852 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.315608 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-internal-tls-certs\") pod \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.315945 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-combined-ca-bundle\") pod \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.316008 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-logs\") pod \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.316073 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-config-data\") pod \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.316230 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bp5z\" (UniqueName: \"kubernetes.io/projected/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-kube-api-access-8bp5z\") pod \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.316274 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-public-tls-certs\") pod \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\" (UID: \"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a\") " Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.316736 4758 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef732e48-f2b4-48cf-822b-c1dabb02ec5c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.316771 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-logs" (OuterVolumeSpecName: "logs") pod "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" (UID: "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.320947 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-kube-api-access-8bp5z" (OuterVolumeSpecName: "kube-api-access-8bp5z") pod "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" (UID: "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a"). InnerVolumeSpecName "kube-api-access-8bp5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.352976 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-config-data" (OuterVolumeSpecName: "config-data") pod "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" (UID: "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.413887 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" (UID: "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.422378 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.422404 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.422413 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.422422 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bp5z\" (UniqueName: \"kubernetes.io/projected/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-kube-api-access-8bp5z\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.427865 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" (UID: "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.438928 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" (UID: "c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.524541 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.524572 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.920470 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a","Type":"ContainerDied","Data":"2a2f3b970c425ba1f2d86284078b5c6771b7c5dc71b45af7f422971f872ddb35"} Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.920519 4758 scope.go:117] "RemoveContainer" containerID="287f91ca92e5106b01ad2e91751f2fe89ddd3b49b01ecd47af389841982b9ded" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.920518 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.924420 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef732e48-f2b4-48cf-822b-c1dabb02ec5c","Type":"ContainerDied","Data":"91f6d072baae84b549b7be4709fb0477872d828e583d3ac2b36d20e8a806af74"} Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.924518 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:05.996844 4758 scope.go:117] "RemoveContainer" containerID="5f6db6b5b39cfac1304684b23a2efdc0dd1e0e71ce4fc2bc29daaff0d2a87fc5" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.010275 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.028265 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.032254 4758 scope.go:117] "RemoveContainer" containerID="2db3fc968b5303b0f720ed6af61aa5662cda312bcb35a9c1d16660eb5ab4418a" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.047537 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.059426 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.071260 4758 scope.go:117] "RemoveContainer" containerID="97898b437d0b252168fdc2ceed1cdc4f24c936263623060876488966a3107070" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.071610 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 16:55:06 crc kubenswrapper[4758]: E0122 16:55:06.072712 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerName="nova-metadata-metadata" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.072729 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerName="nova-metadata-metadata" Jan 22 16:55:06 crc kubenswrapper[4758]: E0122 16:55:06.072763 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerName="nova-metadata-log" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.072769 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerName="nova-metadata-log" Jan 22 16:55:06 crc kubenswrapper[4758]: E0122 16:55:06.072790 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-log" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.072797 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-log" Jan 22 16:55:06 crc kubenswrapper[4758]: E0122 16:55:06.072816 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-api" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.072821 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-api" Jan 22 16:55:06 crc kubenswrapper[4758]: E0122 16:55:06.072831 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c22116-ce0a-4806-bbf7-e514519abff0" containerName="nova-manage" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.072836 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c22116-ce0a-4806-bbf7-e514519abff0" containerName="nova-manage" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.073031 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c22116-ce0a-4806-bbf7-e514519abff0" containerName="nova-manage" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.073043 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-log" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.073050 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" containerName="nova-api-api" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.073058 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerName="nova-metadata-log" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.073085 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" containerName="nova-metadata-metadata" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.074271 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.078067 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.078098 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.078435 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.081795 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.083682 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.088096 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.088190 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.096503 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.109926 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.134997 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/946719d1-252a-449e-9b4e-5ae6639fd635-logs\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135051 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-internal-tls-certs\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135156 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6052fa46-8362-4abe-8577-5e47c36af2c1-config-data\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135251 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6052fa46-8362-4abe-8577-5e47c36af2c1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t76b\" (UniqueName: \"kubernetes.io/projected/946719d1-252a-449e-9b4e-5ae6639fd635-kube-api-access-2t76b\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135317 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6052fa46-8362-4abe-8577-5e47c36af2c1-logs\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135341 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6052fa46-8362-4abe-8577-5e47c36af2c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135362 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135385 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6dkk\" (UniqueName: \"kubernetes.io/projected/6052fa46-8362-4abe-8577-5e47c36af2c1-kube-api-access-v6dkk\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135440 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-public-tls-certs\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.135511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-config-data\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.236906 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6052fa46-8362-4abe-8577-5e47c36af2c1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.236951 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t76b\" (UniqueName: \"kubernetes.io/projected/946719d1-252a-449e-9b4e-5ae6639fd635-kube-api-access-2t76b\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.236986 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6052fa46-8362-4abe-8577-5e47c36af2c1-logs\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.237007 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6052fa46-8362-4abe-8577-5e47c36af2c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.237023 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6dkk\" (UniqueName: \"kubernetes.io/projected/6052fa46-8362-4abe-8577-5e47c36af2c1-kube-api-access-v6dkk\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.237037 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 
16:55:06.237062 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-public-tls-certs\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.237121 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-config-data\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.237156 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/946719d1-252a-449e-9b4e-5ae6639fd635-logs\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.237173 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-internal-tls-certs\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.237208 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6052fa46-8362-4abe-8577-5e47c36af2c1-config-data\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.237984 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/946719d1-252a-449e-9b4e-5ae6639fd635-logs\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.238097 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6052fa46-8362-4abe-8577-5e47c36af2c1-logs\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.240189 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6052fa46-8362-4abe-8577-5e47c36af2c1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.240873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-internal-tls-certs\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.240931 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6052fa46-8362-4abe-8577-5e47c36af2c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.241285 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.244501 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-config-data\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.250861 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6052fa46-8362-4abe-8577-5e47c36af2c1-config-data\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.253913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t76b\" (UniqueName: \"kubernetes.io/projected/946719d1-252a-449e-9b4e-5ae6639fd635-kube-api-access-2t76b\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.254638 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6dkk\" (UniqueName: \"kubernetes.io/projected/6052fa46-8362-4abe-8577-5e47c36af2c1-kube-api-access-v6dkk\") pod \"nova-metadata-0\" (UID: \"6052fa46-8362-4abe-8577-5e47c36af2c1\") " pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.255580 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/946719d1-252a-449e-9b4e-5ae6639fd635-public-tls-certs\") pod \"nova-api-0\" (UID: \"946719d1-252a-449e-9b4e-5ae6639fd635\") " pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.399071 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.413321 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.820043 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a" path="/var/lib/kubelet/pods/c7b876ea-10a5-4c6b-bca2-a7cf7fc8dc2a/volumes" Jan 22 16:55:06 crc kubenswrapper[4758]: I0122 16:55:06.821472 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef732e48-f2b4-48cf-822b-c1dabb02ec5c" path="/var/lib/kubelet/pods/ef732e48-f2b4-48cf-822b-c1dabb02ec5c/volumes" Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.250949 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.275292 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.958708 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6052fa46-8362-4abe-8577-5e47c36af2c1","Type":"ContainerStarted","Data":"85b0719022de734cbcd026e90b4341b89cb51a2504f3b99d6f3d8042ff465874"} Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.959085 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6052fa46-8362-4abe-8577-5e47c36af2c1","Type":"ContainerStarted","Data":"12634540ea49005686fde2af79171475f367e2db8d5b7e32c0e57ab85100390e"} Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.959103 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6052fa46-8362-4abe-8577-5e47c36af2c1","Type":"ContainerStarted","Data":"551eb65968be8406ad070a313e45b563d292c05cd760e78b2166242e261d2eb0"} Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.962937 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"946719d1-252a-449e-9b4e-5ae6639fd635","Type":"ContainerStarted","Data":"0f9b0549462cd7db5f2ce8536d33a42e8b94cebcf149a469d83e8ea82711b003"} Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.962984 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"946719d1-252a-449e-9b4e-5ae6639fd635","Type":"ContainerStarted","Data":"aeacca723cd88bd46149eb0068c4e1ff0e0b08c471ccf4479775f483f92922f1"} Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.962999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"946719d1-252a-449e-9b4e-5ae6639fd635","Type":"ContainerStarted","Data":"5831e41c121d089132add0f651264286e95e5920b12b6f76b25e6c55fa6cf6a0"} Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.970445 4758 generic.go:334] "Generic (PLEG): container finished" podID="8c26bc28-4e84-4218-9bfb-7d7cc6206cac" containerID="e15f171d4c44505a373a958aea50becf54aa8f12667b4062e81c229ace225892" exitCode=0 Jan 22 16:55:07 crc kubenswrapper[4758]: I0122 16:55:07.970491 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8c26bc28-4e84-4218-9bfb-7d7cc6206cac","Type":"ContainerDied","Data":"e15f171d4c44505a373a958aea50becf54aa8f12667b4062e81c229ace225892"} Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.002901 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.002879979 podStartE2EDuration="2.002879979s" podCreationTimestamp="2026-01-22 16:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:55:07.98057046 +0000 UTC m=+1529.463909755" watchObservedRunningTime="2026-01-22 16:55:08.002879979 +0000 UTC m=+1529.486219264" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.009922 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.009900471 podStartE2EDuration="2.009900471s" podCreationTimestamp="2026-01-22 16:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:55:08.001486991 +0000 UTC m=+1529.484826286" watchObservedRunningTime="2026-01-22 16:55:08.009900471 +0000 UTC m=+1529.493239756" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.218497 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.399907 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4hnx\" (UniqueName: \"kubernetes.io/projected/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-kube-api-access-q4hnx\") pod \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.400019 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-config-data\") pod \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.400232 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-combined-ca-bundle\") pod \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\" (UID: \"8c26bc28-4e84-4218-9bfb-7d7cc6206cac\") " Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.405399 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-kube-api-access-q4hnx" (OuterVolumeSpecName: "kube-api-access-q4hnx") pod "8c26bc28-4e84-4218-9bfb-7d7cc6206cac" (UID: "8c26bc28-4e84-4218-9bfb-7d7cc6206cac"). InnerVolumeSpecName "kube-api-access-q4hnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.445034 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-config-data" (OuterVolumeSpecName: "config-data") pod "8c26bc28-4e84-4218-9bfb-7d7cc6206cac" (UID: "8c26bc28-4e84-4218-9bfb-7d7cc6206cac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.445853 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c26bc28-4e84-4218-9bfb-7d7cc6206cac" (UID: "8c26bc28-4e84-4218-9bfb-7d7cc6206cac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.506142 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4hnx\" (UniqueName: \"kubernetes.io/projected/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-kube-api-access-q4hnx\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.506183 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.506198 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c26bc28-4e84-4218-9bfb-7d7cc6206cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.982727 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.983026 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8c26bc28-4e84-4218-9bfb-7d7cc6206cac","Type":"ContainerDied","Data":"4488de1456c7f767e5b8b24caa22cf78e7cf04698fa3a350844526acead4f3bc"} Jan 22 16:55:08 crc kubenswrapper[4758]: I0122 16:55:08.983102 4758 scope.go:117] "RemoveContainer" containerID="e15f171d4c44505a373a958aea50becf54aa8f12667b4062e81c229ace225892" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.010931 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.024255 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.046379 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:55:09 crc kubenswrapper[4758]: E0122 16:55:09.046999 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c26bc28-4e84-4218-9bfb-7d7cc6206cac" containerName="nova-scheduler-scheduler" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.047023 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c26bc28-4e84-4218-9bfb-7d7cc6206cac" containerName="nova-scheduler-scheduler" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.047263 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c26bc28-4e84-4218-9bfb-7d7cc6206cac" containerName="nova-scheduler-scheduler" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.047956 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.050232 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.080736 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:55:09 crc kubenswrapper[4758]: E0122 16:55:09.096708 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c26bc28_4e84_4218_9bfb_7d7cc6206cac.slice/crio-4488de1456c7f767e5b8b24caa22cf78e7cf04698fa3a350844526acead4f3bc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c26bc28_4e84_4218_9bfb_7d7cc6206cac.slice\": RecentStats: unable to find data in memory cache]" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.121914 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fd7db8-beee-4742-bc51-2234f6b22e17-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40fd7db8-beee-4742-bc51-2234f6b22e17\") " pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.121962 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zlx\" (UniqueName: \"kubernetes.io/projected/40fd7db8-beee-4742-bc51-2234f6b22e17-kube-api-access-l6zlx\") pod \"nova-scheduler-0\" (UID: \"40fd7db8-beee-4742-bc51-2234f6b22e17\") " pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.122068 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fd7db8-beee-4742-bc51-2234f6b22e17-config-data\") pod \"nova-scheduler-0\" (UID: \"40fd7db8-beee-4742-bc51-2234f6b22e17\") " pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.223846 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fd7db8-beee-4742-bc51-2234f6b22e17-config-data\") pod \"nova-scheduler-0\" (UID: \"40fd7db8-beee-4742-bc51-2234f6b22e17\") " pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.224002 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fd7db8-beee-4742-bc51-2234f6b22e17-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40fd7db8-beee-4742-bc51-2234f6b22e17\") " pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.224033 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zlx\" (UniqueName: \"kubernetes.io/projected/40fd7db8-beee-4742-bc51-2234f6b22e17-kube-api-access-l6zlx\") pod \"nova-scheduler-0\" (UID: \"40fd7db8-beee-4742-bc51-2234f6b22e17\") " pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.228851 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40fd7db8-beee-4742-bc51-2234f6b22e17-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40fd7db8-beee-4742-bc51-2234f6b22e17\") " 
pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.229236 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40fd7db8-beee-4742-bc51-2234f6b22e17-config-data\") pod \"nova-scheduler-0\" (UID: \"40fd7db8-beee-4742-bc51-2234f6b22e17\") " pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.241482 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zlx\" (UniqueName: \"kubernetes.io/projected/40fd7db8-beee-4742-bc51-2234f6b22e17-kube-api-access-l6zlx\") pod \"nova-scheduler-0\" (UID: \"40fd7db8-beee-4742-bc51-2234f6b22e17\") " pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.368772 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.856694 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 16:55:09 crc kubenswrapper[4758]: I0122 16:55:09.993930 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40fd7db8-beee-4742-bc51-2234f6b22e17","Type":"ContainerStarted","Data":"cba4c79b0e487a68378abf43bc9866758fe2ab961d565ec3e36953d775c5baff"} Jan 22 16:55:10 crc kubenswrapper[4758]: I0122 16:55:10.820756 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c26bc28-4e84-4218-9bfb-7d7cc6206cac" path="/var/lib/kubelet/pods/8c26bc28-4e84-4218-9bfb-7d7cc6206cac/volumes" Jan 22 16:55:11 crc kubenswrapper[4758]: I0122 16:55:11.006812 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40fd7db8-beee-4742-bc51-2234f6b22e17","Type":"ContainerStarted","Data":"67e70ad21ba1b873ead28099bf482578a4545536bcb23beb4ac0f9cbfa0a5a45"} Jan 22 16:55:11 crc kubenswrapper[4758]: I0122 16:55:11.026158 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.026127881 podStartE2EDuration="2.026127881s" podCreationTimestamp="2026-01-22 16:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:55:11.02244916 +0000 UTC m=+1532.505788455" watchObservedRunningTime="2026-01-22 16:55:11.026127881 +0000 UTC m=+1532.509467166" Jan 22 16:55:11 crc kubenswrapper[4758]: I0122 16:55:11.414038 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 16:55:11 crc kubenswrapper[4758]: I0122 16:55:11.414436 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 16:55:14 crc kubenswrapper[4758]: I0122 16:55:14.369251 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 16:55:16 crc kubenswrapper[4758]: I0122 16:55:16.399455 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 16:55:16 crc kubenswrapper[4758]: I0122 16:55:16.400468 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 16:55:16 crc kubenswrapper[4758]: I0122 16:55:16.413960 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 16:55:16 crc kubenswrapper[4758]: I0122 16:55:16.414074 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 16:55:17 crc kubenswrapper[4758]: I0122 16:55:17.409938 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="946719d1-252a-449e-9b4e-5ae6639fd635" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:55:17 crc kubenswrapper[4758]: I0122 16:55:17.409938 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="946719d1-252a-449e-9b4e-5ae6639fd635" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:55:17 crc kubenswrapper[4758]: I0122 16:55:17.425099 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6052fa46-8362-4abe-8577-5e47c36af2c1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:55:17 crc kubenswrapper[4758]: I0122 16:55:17.425135 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6052fa46-8362-4abe-8577-5e47c36af2c1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 16:55:19 crc kubenswrapper[4758]: I0122 16:55:19.368984 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 16:55:19 crc kubenswrapper[4758]: I0122 16:55:19.402536 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 16:55:20 crc kubenswrapper[4758]: I0122 16:55:20.121170 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 16:55:24 crc kubenswrapper[4758]: I0122 16:55:24.310282 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 16:55:26 crc kubenswrapper[4758]: I0122 16:55:26.483895 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 16:55:26 crc kubenswrapper[4758]: I0122 16:55:26.484305 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 16:55:26 crc kubenswrapper[4758]: I0122 16:55:26.492541 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 16:55:26 crc kubenswrapper[4758]: I0122 16:55:26.493825 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 16:55:26 crc kubenswrapper[4758]: I0122 16:55:26.494526 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 16:55:26 crc kubenswrapper[4758]: I0122 16:55:26.496066 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 16:55:26 crc kubenswrapper[4758]: I0122 16:55:26.505997 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 16:55:26 crc kubenswrapper[4758]: I0122 16:55:26.516239 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Jan 22 16:55:27 crc kubenswrapper[4758]: I0122 16:55:27.167052 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 16:55:27 crc kubenswrapper[4758]: I0122 16:55:27.185558 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 16:55:35 crc kubenswrapper[4758]: I0122 16:55:35.845286 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 16:55:36 crc kubenswrapper[4758]: I0122 16:55:36.902423 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 16:55:39 crc kubenswrapper[4758]: I0122 16:55:39.262374 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="78374f0a-c7de-486b-9118-fe2dccc5bdca" containerName="rabbitmq" containerID="cri-o://fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c" gracePeriod=604797 Jan 22 16:55:39 crc kubenswrapper[4758]: I0122 16:55:39.843565 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" containerName="rabbitmq" containerID="cri-o://6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304" gracePeriod=604798 Jan 22 16:55:41 crc kubenswrapper[4758]: I0122 16:55:41.937279 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.058979 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7805c55-6999-45a8-afd4-3fd1fa039b7a-pod-info\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059122 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-config-data\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059178 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059210 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-plugins-conf\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059260 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-plugins\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059292 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-confd\") pod 
\"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059361 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-erlang-cookie\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059408 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-server-conf\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059455 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-tls\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059476 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7805c55-6999-45a8-afd4-3fd1fa039b7a-erlang-cookie-secret\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.059506 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dlwr\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-kube-api-access-8dlwr\") pod \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\" (UID: \"f7805c55-6999-45a8-afd4-3fd1fa039b7a\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.062386 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.071585 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.074262 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.074901 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f7805c55-6999-45a8-afd4-3fd1fa039b7a-pod-info" (OuterVolumeSpecName: "pod-info") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.075024 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.075135 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.075318 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-kube-api-access-8dlwr" (OuterVolumeSpecName: "kube-api-access-8dlwr") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "kube-api-access-8dlwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.084480 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7805c55-6999-45a8-afd4-3fd1fa039b7a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.096108 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-config-data" (OuterVolumeSpecName: "config-data") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.132308 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.157304 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-server-conf" (OuterVolumeSpecName: "server-conf") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.162055 4758 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7805c55-6999-45a8-afd4-3fd1fa039b7a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.162080 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.162105 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.162141 4758 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.162150 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.165871 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.165945 4758 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7805c55-6999-45a8-afd4-3fd1fa039b7a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.165961 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.165972 4758 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7805c55-6999-45a8-afd4-3fd1fa039b7a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.165984 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dlwr\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-kube-api-access-8dlwr\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.195386 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.240834 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f7805c55-6999-45a8-afd4-3fd1fa039b7a" (UID: "f7805c55-6999-45a8-afd4-3fd1fa039b7a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.266592 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78374f0a-c7de-486b-9118-fe2dccc5bdca-pod-info\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.266692 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-plugins\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.266757 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.266805 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-config-data\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.266883 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-tls\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.266909 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz86b\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-kube-api-access-xz86b\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.266936 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78374f0a-c7de-486b-9118-fe2dccc5bdca-erlang-cookie-secret\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.267015 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-erlang-cookie\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.267062 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-confd\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.267082 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-server-conf\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: 
\"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.267106 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-plugins-conf\") pod \"78374f0a-c7de-486b-9118-fe2dccc5bdca\" (UID: \"78374f0a-c7de-486b-9118-fe2dccc5bdca\") " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.267806 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.267825 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7805c55-6999-45a8-afd4-3fd1fa039b7a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.268603 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.269188 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.270027 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.284957 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.288807 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/78374f0a-c7de-486b-9118-fe2dccc5bdca-pod-info" (OuterVolumeSpecName: "pod-info") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.290170 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78374f0a-c7de-486b-9118-fe2dccc5bdca-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.300904 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.307016 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-kube-api-access-xz86b" (OuterVolumeSpecName: "kube-api-access-xz86b") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "kube-api-access-xz86b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.367407 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-config-data" (OuterVolumeSpecName: "config-data") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.369187 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz86b\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-kube-api-access-xz86b\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.369220 4758 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78374f0a-c7de-486b-9118-fe2dccc5bdca-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.369230 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.369240 4758 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.369249 4758 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78374f0a-c7de-486b-9118-fe2dccc5bdca-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.369257 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.369288 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.369299 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc 
kubenswrapper[4758]: I0122 16:55:42.369307 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.379311 4758 generic.go:334] "Generic (PLEG): container finished" podID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" containerID="6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304" exitCode=0 Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.379446 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7805c55-6999-45a8-afd4-3fd1fa039b7a","Type":"ContainerDied","Data":"6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304"} Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.379548 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7805c55-6999-45a8-afd4-3fd1fa039b7a","Type":"ContainerDied","Data":"3163ba667ef55e66abe3d198eb0aa4c990e5e7e6e438fec9d7dcf6a48d2f19d9"} Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.379612 4758 scope.go:117] "RemoveContainer" containerID="6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.379852 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.394942 4758 generic.go:334] "Generic (PLEG): container finished" podID="78374f0a-c7de-486b-9118-fe2dccc5bdca" containerID="fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c" exitCode=0 Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.395007 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78374f0a-c7de-486b-9118-fe2dccc5bdca","Type":"ContainerDied","Data":"fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c"} Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.395035 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78374f0a-c7de-486b-9118-fe2dccc5bdca","Type":"ContainerDied","Data":"94944dc56131edccaead4d11a34fc16104c1ec896a0e5471a50bf56b08cfb229"} Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.395093 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.439204 4758 scope.go:117] "RemoveContainer" containerID="c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.502325 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.535968 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.563399 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-server-conf" (OuterVolumeSpecName: "server-conf") pod "78374f0a-c7de-486b-9118-fe2dccc5bdca" (UID: "78374f0a-c7de-486b-9118-fe2dccc5bdca"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.581026 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78374f0a-c7de-486b-9118-fe2dccc5bdca-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.581057 4758 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78374f0a-c7de-486b-9118-fe2dccc5bdca-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.581067 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.669459 4758 scope.go:117] "RemoveContainer" containerID="6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304" Jan 22 16:55:42 crc kubenswrapper[4758]: E0122 16:55:42.669987 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304\": container with ID starting with 6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304 not found: ID does not exist" containerID="6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.670052 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304"} err="failed to get container status \"6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304\": rpc error: code = NotFound desc = could not find container \"6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304\": container with ID starting with 6b5b7187bab226acbf09afeb6305336208961ff049b92436aa22b21b922e9304 not found: ID does not exist" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.670088 4758 scope.go:117] "RemoveContainer" containerID="c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d" Jan 22 16:55:42 crc kubenswrapper[4758]: E0122 16:55:42.670454 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d\": container with ID starting with c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d not found: ID does not exist" containerID="c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.670501 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d"} err="failed to get container status \"c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d\": rpc error: code = NotFound desc = could not find container 
\"c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d\": container with ID starting with c26825e462dfa67cd2f638d3befab499bfa4a240e39dfa9dfa58220e27604d5d not found: ID does not exist" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.670527 4758 scope.go:117] "RemoveContainer" containerID="fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.675002 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.688827 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.734405 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.734416 4758 scope.go:117] "RemoveContainer" containerID="8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e" Jan 22 16:55:42 crc kubenswrapper[4758]: E0122 16:55:42.735145 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" containerName="setup-container" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.735179 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" containerName="setup-container" Jan 22 16:55:42 crc kubenswrapper[4758]: E0122 16:55:42.735207 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78374f0a-c7de-486b-9118-fe2dccc5bdca" containerName="setup-container" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.735217 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78374f0a-c7de-486b-9118-fe2dccc5bdca" containerName="setup-container" Jan 22 16:55:42 crc kubenswrapper[4758]: E0122 16:55:42.735248 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78374f0a-c7de-486b-9118-fe2dccc5bdca" containerName="rabbitmq" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.735258 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78374f0a-c7de-486b-9118-fe2dccc5bdca" containerName="rabbitmq" Jan 22 16:55:42 crc kubenswrapper[4758]: E0122 16:55:42.735282 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" containerName="rabbitmq" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.735290 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" containerName="rabbitmq" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.735585 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78374f0a-c7de-486b-9118-fe2dccc5bdca" containerName="rabbitmq" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.735614 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" containerName="rabbitmq" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.737188 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.739575 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.744791 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.745229 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.745234 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.745804 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.746465 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5sdkn" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.746709 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.747615 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.763950 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.768006 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.782464 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785057 4758 scope.go:117] "RemoveContainer" containerID="fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785435 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785499 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/11ff72c7-325b-4836-8d06-dce1d2e8ea26-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785524 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/11ff72c7-325b-4836-8d06-dce1d2e8ea26-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785564 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785650 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/11ff72c7-325b-4836-8d06-dce1d2e8ea26-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785684 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11ff72c7-325b-4836-8d06-dce1d2e8ea26-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785705 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785732 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2jm\" (UniqueName: \"kubernetes.io/projected/11ff72c7-325b-4836-8d06-dce1d2e8ea26-kube-api-access-bq2jm\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785780 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/11ff72c7-325b-4836-8d06-dce1d2e8ea26-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.785840 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.787222 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: E0122 16:55:42.796381 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c\": container with ID starting with fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c not found: ID does not exist" containerID="fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.796451 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c"} err="failed to get container status \"fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c\": rpc error: code = NotFound desc = could not find container \"fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c\": container with ID starting with fca875c9cea54d51ccfd1cc1dec5c30439a38813cd673c3933e7c8bd9170113c not found: ID does not exist" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.796485 4758 scope.go:117] "RemoveContainer" containerID="8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e" Jan 22 16:55:42 crc kubenswrapper[4758]: E0122 16:55:42.797080 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e\": container with ID starting with 8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e not found: ID does not exist" containerID="8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.797245 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e"} err="failed to get container status \"8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e\": rpc error: code = NotFound desc = could not find container \"8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e\": container with ID starting with 8ef43c5864260465182f92e5fb8fcb55f0a5a865cc6f8dd8ac08a77e2cbd0e8e not found: ID does not exist" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.797911 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.797997 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.798405 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.798831 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.799483 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d8jxf" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.799682 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.808871 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 16:55:42 crc 
kubenswrapper[4758]: I0122 16:55:42.880084 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78374f0a-c7de-486b-9118-fe2dccc5bdca" path="/var/lib/kubelet/pods/78374f0a-c7de-486b-9118-fe2dccc5bdca/volumes" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.881063 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7805c55-6999-45a8-afd4-3fd1fa039b7a" path="/var/lib/kubelet/pods/f7805c55-6999-45a8-afd4-3fd1fa039b7a/volumes" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.881726 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.887943 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.888155 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.888302 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7f5\" (UniqueName: \"kubernetes.io/projected/401b6249-7451-4767-9363-89295d6224f8-kube-api-access-vb7f5\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.888397 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.888491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.890561 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.890852 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/401b6249-7451-4767-9363-89295d6224f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.890961 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/11ff72c7-325b-4836-8d06-dce1d2e8ea26-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.891041 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/11ff72c7-325b-4836-8d06-dce1d2e8ea26-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.891190 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.891291 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.891424 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.891968 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.892545 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.892602 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/11ff72c7-325b-4836-8d06-dce1d2e8ea26-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.892633 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/401b6249-7451-4767-9363-89295d6224f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.892704 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/401b6249-7451-4767-9363-89295d6224f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " 
pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.892819 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/401b6249-7451-4767-9363-89295d6224f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.892859 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.892889 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11ff72c7-325b-4836-8d06-dce1d2e8ea26-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.892921 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.893060 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq2jm\" (UniqueName: \"kubernetes.io/projected/11ff72c7-325b-4836-8d06-dce1d2e8ea26-kube-api-access-bq2jm\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.893153 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/401b6249-7451-4767-9363-89295d6224f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.893185 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/11ff72c7-325b-4836-8d06-dce1d2e8ea26-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.893249 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/11ff72c7-325b-4836-8d06-dce1d2e8ea26-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.894205 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/11ff72c7-325b-4836-8d06-dce1d2e8ea26-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.894221 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11ff72c7-325b-4836-8d06-dce1d2e8ea26-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.894484 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.897581 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/11ff72c7-325b-4836-8d06-dce1d2e8ea26-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.897649 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/11ff72c7-325b-4836-8d06-dce1d2e8ea26-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.905094 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.911534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq2jm\" (UniqueName: \"kubernetes.io/projected/11ff72c7-325b-4836-8d06-dce1d2e8ea26-kube-api-access-bq2jm\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.912462 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/11ff72c7-325b-4836-8d06-dce1d2e8ea26-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.940098 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"11ff72c7-325b-4836-8d06-dce1d2e8ea26\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.994707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/401b6249-7451-4767-9363-89295d6224f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.994801 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " 
pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.994842 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb7f5\" (UniqueName: \"kubernetes.io/projected/401b6249-7451-4767-9363-89295d6224f8-kube-api-access-vb7f5\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.994864 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.994890 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.994934 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/401b6249-7451-4767-9363-89295d6224f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.994993 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.995026 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/401b6249-7451-4767-9363-89295d6224f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.995042 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/401b6249-7451-4767-9363-89295d6224f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.995084 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/401b6249-7451-4767-9363-89295d6224f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.995102 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.995877 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.996270 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.996398 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.996487 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/401b6249-7451-4767-9363-89295d6224f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.996661 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/401b6249-7451-4767-9363-89295d6224f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:42 crc kubenswrapper[4758]: I0122 16:55:42.998461 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/401b6249-7451-4767-9363-89295d6224f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.000613 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.000816 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/401b6249-7451-4767-9363-89295d6224f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.000961 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/401b6249-7451-4767-9363-89295d6224f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.005319 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/401b6249-7451-4767-9363-89295d6224f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.016828 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vb7f5\" (UniqueName: \"kubernetes.io/projected/401b6249-7451-4767-9363-89295d6224f8-kube-api-access-vb7f5\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.047968 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"401b6249-7451-4767-9363-89295d6224f8\") " pod="openstack/rabbitmq-server-0" Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.125240 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.190089 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.691469 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 16:55:43 crc kubenswrapper[4758]: I0122 16:55:43.774989 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 16:55:43 crc kubenswrapper[4758]: W0122 16:55:43.782924 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod401b6249_7451_4767_9363_89295d6224f8.slice/crio-3921cf6773d1dbe5aef0f68930d95847bc856f7eb7a39df0e6b5092333daddc8 WatchSource:0}: Error finding container 3921cf6773d1dbe5aef0f68930d95847bc856f7eb7a39df0e6b5092333daddc8: Status 404 returned error can't find the container with id 3921cf6773d1dbe5aef0f68930d95847bc856f7eb7a39df0e6b5092333daddc8 Jan 22 16:55:44 crc kubenswrapper[4758]: I0122 16:55:44.445232 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"401b6249-7451-4767-9363-89295d6224f8","Type":"ContainerStarted","Data":"3921cf6773d1dbe5aef0f68930d95847bc856f7eb7a39df0e6b5092333daddc8"} Jan 22 16:55:44 crc kubenswrapper[4758]: I0122 16:55:44.447465 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"11ff72c7-325b-4836-8d06-dce1d2e8ea26","Type":"ContainerStarted","Data":"e97a6fcf0062567cd65093d63bac044fb187d7617ddff70311aa71437bb472ac"} Jan 22 16:55:45 crc kubenswrapper[4758]: I0122 16:55:45.460246 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"401b6249-7451-4767-9363-89295d6224f8","Type":"ContainerStarted","Data":"baf7291c599816122f79680d6ed602db325ca5372b9303927e9c4c29216309e2"} Jan 22 16:55:45 crc kubenswrapper[4758]: I0122 16:55:45.462542 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"11ff72c7-325b-4836-8d06-dce1d2e8ea26","Type":"ContainerStarted","Data":"d2c8d460a8b16357f75832d9e83c904ac9386e766806b7538836dc2dcc106902"} Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.273522 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58f5d66ff5-hh7r6"] Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.279534 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.283590 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.287577 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f5d66ff5-hh7r6"] Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.417259 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-nb\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.417687 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-config\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.417817 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-svc\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.417904 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-openstack-edpm-ipam\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.417965 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9s7h\" (UniqueName: \"kubernetes.io/projected/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-kube-api-access-s9s7h\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.418044 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-sb\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.418151 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-swift-storage-0\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.519412 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-sb\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" 
(UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.519472 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-swift-storage-0\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.519544 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-nb\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.519623 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-config\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.519662 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-svc\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.519679 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-openstack-edpm-ipam\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.519697 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9s7h\" (UniqueName: \"kubernetes.io/projected/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-kube-api-access-s9s7h\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.520582 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-sb\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.520606 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-swift-storage-0\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.521562 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-openstack-edpm-ipam\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " 
pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.522030 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-config\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.522085 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-svc\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.522143 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-nb\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.545570 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9s7h\" (UniqueName: \"kubernetes.io/projected/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-kube-api-access-s9s7h\") pod \"dnsmasq-dns-58f5d66ff5-hh7r6\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:52 crc kubenswrapper[4758]: I0122 16:55:52.599756 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:53 crc kubenswrapper[4758]: I0122 16:55:53.246645 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f5d66ff5-hh7r6"] Jan 22 16:55:53 crc kubenswrapper[4758]: I0122 16:55:53.547140 4758 generic.go:334] "Generic (PLEG): container finished" podID="de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" containerID="0ebfba90678bdb0f5a40812e3b83611134c7bb8a5280e3707e1a6ed4102a63ce" exitCode=0 Jan 22 16:55:53 crc kubenswrapper[4758]: I0122 16:55:53.547199 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" event={"ID":"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e","Type":"ContainerDied","Data":"0ebfba90678bdb0f5a40812e3b83611134c7bb8a5280e3707e1a6ed4102a63ce"} Jan 22 16:55:53 crc kubenswrapper[4758]: I0122 16:55:53.547248 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" event={"ID":"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e","Type":"ContainerStarted","Data":"a8485cfcffcb62346c72555051e18e798aef2d113f2dba09f0b6b698f97069a7"} Jan 22 16:55:54 crc kubenswrapper[4758]: I0122 16:55:54.559177 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" event={"ID":"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e","Type":"ContainerStarted","Data":"c4d63cf9a7c63ba7dc385c7eb5535a079c8fd9d79bb8ab9a1a680b32c359e687"} Jan 22 16:55:54 crc kubenswrapper[4758]: I0122 16:55:54.559450 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:55:54 crc kubenswrapper[4758]: I0122 16:55:54.581162 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" podStartSLOduration=2.581139526 podStartE2EDuration="2.581139526s" 
podCreationTimestamp="2026-01-22 16:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:55:54.580134979 +0000 UTC m=+1576.063474304" watchObservedRunningTime="2026-01-22 16:55:54.581139526 +0000 UTC m=+1576.064478811" Jan 22 16:55:58 crc kubenswrapper[4758]: I0122 16:55:58.751653 4758 scope.go:117] "RemoveContainer" containerID="92e82d494863e6b13a19497ff09c9f8ac71ec272cebf5a6eb177b0c911031b15" Jan 22 16:56:02 crc kubenswrapper[4758]: I0122 16:56:02.601693 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:56:02 crc kubenswrapper[4758]: I0122 16:56:02.678443 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578cd76f49-qt7ds"] Jan 22 16:56:02 crc kubenswrapper[4758]: I0122 16:56:02.678721 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" podUID="a23c56d2-baa4-4aac-b2d2-25da6724e3b1" containerName="dnsmasq-dns" containerID="cri-o://b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8" gracePeriod=10 Jan 22 16:56:02 crc kubenswrapper[4758]: I0122 16:56:02.930679 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7884569b4f-9q84h"] Jan 22 16:56:02 crc kubenswrapper[4758]: I0122 16:56:02.932953 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:02 crc kubenswrapper[4758]: I0122 16:56:02.963185 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7884569b4f-9q84h"] Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.052131 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-openstack-edpm-ipam\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.052210 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tpjf\" (UniqueName: \"kubernetes.io/projected/33e06aca-f569-49e5-8849-8677661defe4-kube-api-access-2tpjf\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.052239 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-ovsdbserver-nb\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.052339 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-dns-svc\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.052412 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-dns-swift-storage-0\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.052434 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-ovsdbserver-sb\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.052467 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-config\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.154344 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-openstack-edpm-ipam\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.154471 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tpjf\" (UniqueName: \"kubernetes.io/projected/33e06aca-f569-49e5-8849-8677661defe4-kube-api-access-2tpjf\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.154518 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-ovsdbserver-nb\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.154559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-dns-svc\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.154597 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-dns-swift-storage-0\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.154638 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-ovsdbserver-sb\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.154673 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-config\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.155652 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-dns-svc\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.155691 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-ovsdbserver-nb\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.155700 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-openstack-edpm-ipam\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.156075 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-config\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.156146 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-ovsdbserver-sb\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.156849 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33e06aca-f569-49e5-8849-8677661defe4-dns-swift-storage-0\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.181007 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tpjf\" (UniqueName: \"kubernetes.io/projected/33e06aca-f569-49e5-8849-8677661defe4-kube-api-access-2tpjf\") pod \"dnsmasq-dns-7884569b4f-9q84h\" (UID: \"33e06aca-f569-49e5-8849-8677661defe4\") " pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.253033 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.360182 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.460511 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-swift-storage-0\") pod \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.460583 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcv77\" (UniqueName: \"kubernetes.io/projected/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-kube-api-access-pcv77\") pod \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.460708 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-nb\") pod \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.460919 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-svc\") pod \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.460960 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-config\") pod \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.460987 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-sb\") pod \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\" (UID: \"a23c56d2-baa4-4aac-b2d2-25da6724e3b1\") " Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.475223 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-kube-api-access-pcv77" (OuterVolumeSpecName: "kube-api-access-pcv77") pod "a23c56d2-baa4-4aac-b2d2-25da6724e3b1" (UID: "a23c56d2-baa4-4aac-b2d2-25da6724e3b1"). InnerVolumeSpecName "kube-api-access-pcv77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.521215 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a23c56d2-baa4-4aac-b2d2-25da6724e3b1" (UID: "a23c56d2-baa4-4aac-b2d2-25da6724e3b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.541806 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a23c56d2-baa4-4aac-b2d2-25da6724e3b1" (UID: "a23c56d2-baa4-4aac-b2d2-25da6724e3b1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.556946 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-config" (OuterVolumeSpecName: "config") pod "a23c56d2-baa4-4aac-b2d2-25da6724e3b1" (UID: "a23c56d2-baa4-4aac-b2d2-25da6724e3b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.560305 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a23c56d2-baa4-4aac-b2d2-25da6724e3b1" (UID: "a23c56d2-baa4-4aac-b2d2-25da6724e3b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.566003 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a23c56d2-baa4-4aac-b2d2-25da6724e3b1" (UID: "a23c56d2-baa4-4aac-b2d2-25da6724e3b1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.588014 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.588061 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.588071 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.588078 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.588088 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.588098 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcv77\" (UniqueName: \"kubernetes.io/projected/a23c56d2-baa4-4aac-b2d2-25da6724e3b1-kube-api-access-pcv77\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.649237 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23c56d2-baa4-4aac-b2d2-25da6724e3b1" containerID="b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8" exitCode=0 Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.649289 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" event={"ID":"a23c56d2-baa4-4aac-b2d2-25da6724e3b1","Type":"ContainerDied","Data":"b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8"} Jan 22 16:56:03 crc 
kubenswrapper[4758]: I0122 16:56:03.649309 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.649328 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578cd76f49-qt7ds" event={"ID":"a23c56d2-baa4-4aac-b2d2-25da6724e3b1","Type":"ContainerDied","Data":"928815a0333de4a7af4bf2510656f815e92c41cc62800fe9bcb779c6dada9133"} Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.649353 4758 scope.go:117] "RemoveContainer" containerID="b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.672755 4758 scope.go:117] "RemoveContainer" containerID="bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.694430 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578cd76f49-qt7ds"] Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.702495 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578cd76f49-qt7ds"] Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.712534 4758 scope.go:117] "RemoveContainer" containerID="b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8" Jan 22 16:56:03 crc kubenswrapper[4758]: E0122 16:56:03.712988 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8\": container with ID starting with b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8 not found: ID does not exist" containerID="b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.713021 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8"} err="failed to get container status \"b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8\": rpc error: code = NotFound desc = could not find container \"b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8\": container with ID starting with b0a15ecad05d92f9048b1b064bb20a654958a4887a838c4b6ae7e3cf23611ea8 not found: ID does not exist" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.713050 4758 scope.go:117] "RemoveContainer" containerID="bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5" Jan 22 16:56:03 crc kubenswrapper[4758]: E0122 16:56:03.713358 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5\": container with ID starting with bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5 not found: ID does not exist" containerID="bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.713407 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5"} err="failed to get container status \"bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5\": rpc error: code = NotFound desc = could not find container \"bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5\": container with ID starting with 
bd06e2155cdc7c965c3a4e8c71278433b7a8df33700304b82efe1a8ec8b329e5 not found: ID does not exist" Jan 22 16:56:03 crc kubenswrapper[4758]: I0122 16:56:03.814921 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7884569b4f-9q84h"] Jan 22 16:56:03 crc kubenswrapper[4758]: W0122 16:56:03.819968 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33e06aca_f569_49e5_8849_8677661defe4.slice/crio-4a82b300d5ab7c04936dbfa2c79d83f036790e04edd8ae1627fae1107391c0bc WatchSource:0}: Error finding container 4a82b300d5ab7c04936dbfa2c79d83f036790e04edd8ae1627fae1107391c0bc: Status 404 returned error can't find the container with id 4a82b300d5ab7c04936dbfa2c79d83f036790e04edd8ae1627fae1107391c0bc Jan 22 16:56:04 crc kubenswrapper[4758]: I0122 16:56:04.659244 4758 generic.go:334] "Generic (PLEG): container finished" podID="33e06aca-f569-49e5-8849-8677661defe4" containerID="e2a0404d5f6f8be5be8a69e6a93a5fe4f826ec5a7acd4caa2098427ee002bcb9" exitCode=0 Jan 22 16:56:04 crc kubenswrapper[4758]: I0122 16:56:04.659363 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7884569b4f-9q84h" event={"ID":"33e06aca-f569-49e5-8849-8677661defe4","Type":"ContainerDied","Data":"e2a0404d5f6f8be5be8a69e6a93a5fe4f826ec5a7acd4caa2098427ee002bcb9"} Jan 22 16:56:04 crc kubenswrapper[4758]: I0122 16:56:04.659629 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7884569b4f-9q84h" event={"ID":"33e06aca-f569-49e5-8849-8677661defe4","Type":"ContainerStarted","Data":"4a82b300d5ab7c04936dbfa2c79d83f036790e04edd8ae1627fae1107391c0bc"} Jan 22 16:56:04 crc kubenswrapper[4758]: I0122 16:56:04.819510 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23c56d2-baa4-4aac-b2d2-25da6724e3b1" path="/var/lib/kubelet/pods/a23c56d2-baa4-4aac-b2d2-25da6724e3b1/volumes" Jan 22 16:56:05 crc kubenswrapper[4758]: I0122 16:56:05.681954 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7884569b4f-9q84h" event={"ID":"33e06aca-f569-49e5-8849-8677661defe4","Type":"ContainerStarted","Data":"e8d551537a837b146ae14de9ab144159f9720cc8fa082a5bdd32f12aff189b78"} Jan 22 16:56:05 crc kubenswrapper[4758]: I0122 16:56:05.682432 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:05 crc kubenswrapper[4758]: I0122 16:56:05.703630 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7884569b4f-9q84h" podStartSLOduration=3.703613708 podStartE2EDuration="3.703613708s" podCreationTimestamp="2026-01-22 16:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:56:05.701096819 +0000 UTC m=+1587.184436104" watchObservedRunningTime="2026-01-22 16:56:05.703613708 +0000 UTC m=+1587.186952993" Jan 22 16:56:13 crc kubenswrapper[4758]: I0122 16:56:13.255155 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7884569b4f-9q84h" Jan 22 16:56:13 crc kubenswrapper[4758]: I0122 16:56:13.359766 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f5d66ff5-hh7r6"] Jan 22 16:56:13 crc kubenswrapper[4758]: I0122 16:56:13.360028 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" 
podUID="de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" containerName="dnsmasq-dns" containerID="cri-o://c4d63cf9a7c63ba7dc385c7eb5535a079c8fd9d79bb8ab9a1a680b32c359e687" gracePeriod=10 Jan 22 16:56:13 crc kubenswrapper[4758]: I0122 16:56:13.770734 4758 generic.go:334] "Generic (PLEG): container finished" podID="de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" containerID="c4d63cf9a7c63ba7dc385c7eb5535a079c8fd9d79bb8ab9a1a680b32c359e687" exitCode=0 Jan 22 16:56:13 crc kubenswrapper[4758]: I0122 16:56:13.770889 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" event={"ID":"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e","Type":"ContainerDied","Data":"c4d63cf9a7c63ba7dc385c7eb5535a079c8fd9d79bb8ab9a1a680b32c359e687"} Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.346139 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.525041 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-svc\") pod \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.525359 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-config\") pod \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.525525 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-sb\") pod \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.525576 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-openstack-edpm-ipam\") pod \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.525630 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9s7h\" (UniqueName: \"kubernetes.io/projected/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-kube-api-access-s9s7h\") pod \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.525660 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-swift-storage-0\") pod \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.525716 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-nb\") pod \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\" (UID: \"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e\") " Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.530806 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-kube-api-access-s9s7h" (OuterVolumeSpecName: "kube-api-access-s9s7h") pod "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" (UID: "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e"). InnerVolumeSpecName "kube-api-access-s9s7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.591972 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" (UID: "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.594247 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" (UID: "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.596529 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" (UID: "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.603843 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" (UID: "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.612514 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" (UID: "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.613466 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-config" (OuterVolumeSpecName: "config") pod "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" (UID: "de0e9d55-6d9b-4443-9b2d-c8c7d245f95e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.629036 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9s7h\" (UniqueName: \"kubernetes.io/projected/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-kube-api-access-s9s7h\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.629070 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.629082 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.629092 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.629101 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.629110 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.629120 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.785120 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" event={"ID":"de0e9d55-6d9b-4443-9b2d-c8c7d245f95e","Type":"ContainerDied","Data":"a8485cfcffcb62346c72555051e18e798aef2d113f2dba09f0b6b698f97069a7"} Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.785190 4758 scope.go:117] "RemoveContainer" containerID="c4d63cf9a7c63ba7dc385c7eb5535a079c8fd9d79bb8ab9a1a680b32c359e687" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.785537 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f5d66ff5-hh7r6" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.810450 4758 scope.go:117] "RemoveContainer" containerID="0ebfba90678bdb0f5a40812e3b83611134c7bb8a5280e3707e1a6ed4102a63ce" Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.834060 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f5d66ff5-hh7r6"] Jan 22 16:56:14 crc kubenswrapper[4758]: I0122 16:56:14.863793 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58f5d66ff5-hh7r6"] Jan 22 16:56:16 crc kubenswrapper[4758]: I0122 16:56:16.861124 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" path="/var/lib/kubelet/pods/de0e9d55-6d9b-4443-9b2d-c8c7d245f95e/volumes" Jan 22 16:56:18 crc kubenswrapper[4758]: I0122 16:56:18.891322 4758 generic.go:334] "Generic (PLEG): container finished" podID="11ff72c7-325b-4836-8d06-dce1d2e8ea26" containerID="d2c8d460a8b16357f75832d9e83c904ac9386e766806b7538836dc2dcc106902" exitCode=0 Jan 22 16:56:18 crc kubenswrapper[4758]: I0122 16:56:18.891647 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"11ff72c7-325b-4836-8d06-dce1d2e8ea26","Type":"ContainerDied","Data":"d2c8d460a8b16357f75832d9e83c904ac9386e766806b7538836dc2dcc106902"} Jan 22 16:56:18 crc kubenswrapper[4758]: I0122 16:56:18.896323 4758 generic.go:334] "Generic (PLEG): container finished" podID="401b6249-7451-4767-9363-89295d6224f8" containerID="baf7291c599816122f79680d6ed602db325ca5372b9303927e9c4c29216309e2" exitCode=0 Jan 22 16:56:18 crc kubenswrapper[4758]: I0122 16:56:18.896363 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"401b6249-7451-4767-9363-89295d6224f8","Type":"ContainerDied","Data":"baf7291c599816122f79680d6ed602db325ca5372b9303927e9c4c29216309e2"} Jan 22 16:56:19 crc kubenswrapper[4758]: I0122 16:56:19.912281 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"401b6249-7451-4767-9363-89295d6224f8","Type":"ContainerStarted","Data":"195d4f86dca0561e90509afa6c0f2c992cdf44610686a572734583ec335bd24a"} Jan 22 16:56:19 crc kubenswrapper[4758]: I0122 16:56:19.913002 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 16:56:19 crc kubenswrapper[4758]: I0122 16:56:19.914619 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"11ff72c7-325b-4836-8d06-dce1d2e8ea26","Type":"ContainerStarted","Data":"94496f44c9dd9556076f0c3ec035fa602461186eb7f7593d77ebbc07e9e811f5"} Jan 22 16:56:19 crc kubenswrapper[4758]: I0122 16:56:19.914810 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:56:19 crc kubenswrapper[4758]: I0122 16:56:19.942775 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.942756077 podStartE2EDuration="37.942756077s" podCreationTimestamp="2026-01-22 16:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:56:19.936795685 +0000 UTC m=+1601.420134980" watchObservedRunningTime="2026-01-22 16:56:19.942756077 +0000 UTC m=+1601.426095362" Jan 22 16:56:19 crc kubenswrapper[4758]: I0122 16:56:19.976124 4758 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.976103665 podStartE2EDuration="37.976103665s" podCreationTimestamp="2026-01-22 16:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:56:19.965009963 +0000 UTC m=+1601.448349258" watchObservedRunningTime="2026-01-22 16:56:19.976103665 +0000 UTC m=+1601.459442950" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.022666 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q"] Jan 22 16:56:32 crc kubenswrapper[4758]: E0122 16:56:32.023685 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" containerName="init" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.023703 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" containerName="init" Jan 22 16:56:32 crc kubenswrapper[4758]: E0122 16:56:32.023731 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23c56d2-baa4-4aac-b2d2-25da6724e3b1" containerName="dnsmasq-dns" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.023761 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23c56d2-baa4-4aac-b2d2-25da6724e3b1" containerName="dnsmasq-dns" Jan 22 16:56:32 crc kubenswrapper[4758]: E0122 16:56:32.023776 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" containerName="dnsmasq-dns" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.023783 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" containerName="dnsmasq-dns" Jan 22 16:56:32 crc kubenswrapper[4758]: E0122 16:56:32.023793 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23c56d2-baa4-4aac-b2d2-25da6724e3b1" containerName="init" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.023799 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23c56d2-baa4-4aac-b2d2-25da6724e3b1" containerName="init" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.023982 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0e9d55-6d9b-4443-9b2d-c8c7d245f95e" containerName="dnsmasq-dns" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.024011 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23c56d2-baa4-4aac-b2d2-25da6724e3b1" containerName="dnsmasq-dns" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.024682 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.027476 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.028335 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.028412 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.032901 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.053816 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q"] Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.220894 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.220957 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.221054 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.221112 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbxw\" (UniqueName: \"kubernetes.io/projected/4d38f3f0-3531-4733-8548-950b770f2094-kube-api-access-kpbxw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.324407 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbxw\" (UniqueName: \"kubernetes.io/projected/4d38f3f0-3531-4733-8548-950b770f2094-kube-api-access-kpbxw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.324860 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.325194 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.326544 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.331110 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.331263 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.333592 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.340433 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpbxw\" (UniqueName: \"kubernetes.io/projected/4d38f3f0-3531-4733-8548-950b770f2094-kube-api-access-kpbxw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:32 crc kubenswrapper[4758]: I0122 16:56:32.403186 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:56:33 crc kubenswrapper[4758]: I0122 16:56:33.089878 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q"] Jan 22 16:56:33 crc kubenswrapper[4758]: I0122 16:56:33.091856 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:56:33 crc kubenswrapper[4758]: I0122 16:56:33.124188 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" event={"ID":"4d38f3f0-3531-4733-8548-950b770f2094","Type":"ContainerStarted","Data":"f919eae46f546adaa70d2f9e1e46f585ba1cb07cb0b448ad07246c77f7d266ab"} Jan 22 16:56:33 crc kubenswrapper[4758]: I0122 16:56:33.127912 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="11ff72c7-325b-4836-8d06-dce1d2e8ea26" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.227:5671: connect: connection refused" Jan 22 16:56:33 crc kubenswrapper[4758]: I0122 16:56:33.192340 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="401b6249-7451-4767-9363-89295d6224f8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.228:5671: connect: connection refused" Jan 22 16:56:43 crc kubenswrapper[4758]: I0122 16:56:43.128069 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 16:56:43 crc kubenswrapper[4758]: I0122 16:56:43.197904 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 16:56:43 crc kubenswrapper[4758]: I0122 16:56:43.837260 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:56:43 crc kubenswrapper[4758]: I0122 16:56:43.837633 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.038213 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sgz9b"] Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.040857 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.060849 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgz9b"] Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.079625 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-utilities\") pod \"certified-operators-sgz9b\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.079716 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn857\" (UniqueName: \"kubernetes.io/projected/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-kube-api-access-fn857\") pod \"certified-operators-sgz9b\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.079814 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-catalog-content\") pod \"certified-operators-sgz9b\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.181317 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-utilities\") pod \"certified-operators-sgz9b\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.181400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn857\" (UniqueName: \"kubernetes.io/projected/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-kube-api-access-fn857\") pod \"certified-operators-sgz9b\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.181452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-catalog-content\") pod \"certified-operators-sgz9b\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.181923 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-utilities\") pod \"certified-operators-sgz9b\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.186764 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-catalog-content\") pod \"certified-operators-sgz9b\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.204295 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fn857\" (UniqueName: \"kubernetes.io/projected/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-kube-api-access-fn857\") pod \"certified-operators-sgz9b\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:44 crc kubenswrapper[4758]: I0122 16:56:44.370961 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:56:49 crc kubenswrapper[4758]: E0122 16:56:49.911325 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 22 16:56:49 crc kubenswrapper[4758]: E0122 16:56:49.912348 4758 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 22 16:56:49 crc kubenswrapper[4758]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 22 16:56:49 crc kubenswrapper[4758]: - hosts: all Jan 22 16:56:49 crc kubenswrapper[4758]: strategy: linear Jan 22 16:56:49 crc kubenswrapper[4758]: tasks: Jan 22 16:56:49 crc kubenswrapper[4758]: - name: Enable podified-repos Jan 22 16:56:49 crc kubenswrapper[4758]: become: true Jan 22 16:56:49 crc kubenswrapper[4758]: ansible.builtin.shell: | Jan 22 16:56:49 crc kubenswrapper[4758]: set -euxo pipefail Jan 22 16:56:49 crc kubenswrapper[4758]: pushd /var/tmp Jan 22 16:56:49 crc kubenswrapper[4758]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Jan 22 16:56:49 crc kubenswrapper[4758]: pushd repo-setup-main Jan 22 16:56:49 crc kubenswrapper[4758]: python3 -m venv ./venv Jan 22 16:56:49 crc kubenswrapper[4758]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 22 16:56:49 crc kubenswrapper[4758]: ./venv/bin/repo-setup current-podified -b antelope Jan 22 16:56:49 crc kubenswrapper[4758]: popd Jan 22 16:56:49 crc kubenswrapper[4758]: rm -rf repo-setup-main Jan 22 16:56:49 crc kubenswrapper[4758]: Jan 22 16:56:49 crc kubenswrapper[4758]: Jan 22 16:56:49 crc kubenswrapper[4758]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 22 16:56:49 crc kubenswrapper[4758]: edpm_override_hosts: openstack-edpm-ipam Jan 22 16:56:49 crc kubenswrapper[4758]: edpm_service_type: repo-setup Jan 22 16:56:49 crc kubenswrapper[4758]: Jan 22 16:56:49 crc kubenswrapper[4758]: Jan 22 16:56:49 crc kubenswrapper[4758]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpbxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q_openstack(4d38f3f0-3531-4733-8548-950b770f2094): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 22 16:56:49 crc kubenswrapper[4758]: > logger="UnhandledError" Jan 22 16:56:49 crc kubenswrapper[4758]: E0122 16:56:49.914372 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" podUID="4d38f3f0-3531-4733-8548-950b770f2094" Jan 22 16:56:50 crc kubenswrapper[4758]: E0122 16:56:50.416174 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" podUID="4d38f3f0-3531-4733-8548-950b770f2094" Jan 22 16:56:50 crc kubenswrapper[4758]: I0122 16:56:50.423437 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgz9b"] Jan 22 16:56:51 crc kubenswrapper[4758]: I0122 16:56:51.434475 4758 generic.go:334] "Generic (PLEG): container finished" podID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerID="5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa" exitCode=0 Jan 22 16:56:51 crc kubenswrapper[4758]: I0122 16:56:51.434523 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgz9b" event={"ID":"4b4ed303-532f-42b2-a60e-b8d95bd6dd08","Type":"ContainerDied","Data":"5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa"} Jan 22 16:56:51 
crc kubenswrapper[4758]: I0122 16:56:51.434550 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgz9b" event={"ID":"4b4ed303-532f-42b2-a60e-b8d95bd6dd08","Type":"ContainerStarted","Data":"44462aa60c7af177fac69d8871f1d255e89235b51a8ff08b9ea57c95030a6b58"} Jan 22 16:56:52 crc kubenswrapper[4758]: I0122 16:56:52.746768 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2znq9"] Jan 22 16:56:52 crc kubenswrapper[4758]: I0122 16:56:52.749716 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:52 crc kubenswrapper[4758]: I0122 16:56:52.759995 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2znq9"] Jan 22 16:56:52 crc kubenswrapper[4758]: I0122 16:56:52.918313 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-catalog-content\") pod \"community-operators-2znq9\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:52 crc kubenswrapper[4758]: I0122 16:56:52.918395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshgp\" (UniqueName: \"kubernetes.io/projected/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-kube-api-access-wshgp\") pod \"community-operators-2znq9\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:52 crc kubenswrapper[4758]: I0122 16:56:52.918726 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-utilities\") pod \"community-operators-2znq9\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:53 crc kubenswrapper[4758]: I0122 16:56:53.021022 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-utilities\") pod \"community-operators-2znq9\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:53 crc kubenswrapper[4758]: I0122 16:56:53.021205 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-catalog-content\") pod \"community-operators-2znq9\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:53 crc kubenswrapper[4758]: I0122 16:56:53.021253 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshgp\" (UniqueName: \"kubernetes.io/projected/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-kube-api-access-wshgp\") pod \"community-operators-2znq9\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:53 crc kubenswrapper[4758]: I0122 16:56:53.021513 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-utilities\") pod 
\"community-operators-2znq9\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:53 crc kubenswrapper[4758]: I0122 16:56:53.021695 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-catalog-content\") pod \"community-operators-2znq9\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:53 crc kubenswrapper[4758]: I0122 16:56:53.051062 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshgp\" (UniqueName: \"kubernetes.io/projected/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-kube-api-access-wshgp\") pod \"community-operators-2znq9\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:53 crc kubenswrapper[4758]: I0122 16:56:53.082274 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:56:53 crc kubenswrapper[4758]: I0122 16:56:53.722863 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2znq9"] Jan 22 16:56:54 crc kubenswrapper[4758]: I0122 16:56:54.517527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2znq9" event={"ID":"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b","Type":"ContainerStarted","Data":"795904d11d4df4e0a6729921b302b711a762acc01ad5ee983aeadcfb655da3d3"} Jan 22 16:56:55 crc kubenswrapper[4758]: I0122 16:56:55.530827 4758 generic.go:334] "Generic (PLEG): container finished" podID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerID="65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d" exitCode=0 Jan 22 16:56:55 crc kubenswrapper[4758]: I0122 16:56:55.530883 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2znq9" event={"ID":"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b","Type":"ContainerDied","Data":"65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d"} Jan 22 16:56:58 crc kubenswrapper[4758]: I0122 16:56:58.889006 4758 scope.go:117] "RemoveContainer" containerID="1d0dd193fe5f1c6b6c78b952d4d11eadc93119951988dacd1373b9ab6e7c6e1a" Jan 22 16:56:58 crc kubenswrapper[4758]: I0122 16:56:58.985464 4758 scope.go:117] "RemoveContainer" containerID="de8d835b77f0252773e03ee0b650c5bc3ff09343adeb3efb346389c76a40bd8f" Jan 22 16:56:59 crc kubenswrapper[4758]: I0122 16:56:59.586470 4758 generic.go:334] "Generic (PLEG): container finished" podID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerID="6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498" exitCode=0 Jan 22 16:56:59 crc kubenswrapper[4758]: I0122 16:56:59.586552 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgz9b" event={"ID":"4b4ed303-532f-42b2-a60e-b8d95bd6dd08","Type":"ContainerDied","Data":"6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498"} Jan 22 16:56:59 crc kubenswrapper[4758]: I0122 16:56:59.589983 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2znq9" event={"ID":"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b","Type":"ContainerStarted","Data":"702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b"} Jan 22 16:57:01 crc kubenswrapper[4758]: I0122 16:57:01.612771 4758 generic.go:334] "Generic (PLEG): 
container finished" podID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerID="702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b" exitCode=0 Jan 22 16:57:01 crc kubenswrapper[4758]: I0122 16:57:01.612852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2znq9" event={"ID":"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b","Type":"ContainerDied","Data":"702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b"} Jan 22 16:57:02 crc kubenswrapper[4758]: I0122 16:57:02.630127 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2znq9" event={"ID":"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b","Type":"ContainerStarted","Data":"07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c"} Jan 22 16:57:02 crc kubenswrapper[4758]: I0122 16:57:02.633393 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgz9b" event={"ID":"4b4ed303-532f-42b2-a60e-b8d95bd6dd08","Type":"ContainerStarted","Data":"dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e"} Jan 22 16:57:02 crc kubenswrapper[4758]: I0122 16:57:02.674398 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2znq9" podStartSLOduration=6.1686103 podStartE2EDuration="10.674356899s" podCreationTimestamp="2026-01-22 16:56:52 +0000 UTC" firstStartedPulling="2026-01-22 16:56:57.564289939 +0000 UTC m=+1639.047629224" lastFinishedPulling="2026-01-22 16:57:02.070036538 +0000 UTC m=+1643.553375823" observedRunningTime="2026-01-22 16:57:02.658347413 +0000 UTC m=+1644.141686718" watchObservedRunningTime="2026-01-22 16:57:02.674356899 +0000 UTC m=+1644.157696184" Jan 22 16:57:03 crc kubenswrapper[4758]: I0122 16:57:03.083399 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:57:03 crc kubenswrapper[4758]: I0122 16:57:03.083454 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:57:04 crc kubenswrapper[4758]: I0122 16:57:04.138673 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2znq9" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerName="registry-server" probeResult="failure" output=< Jan 22 16:57:04 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 16:57:04 crc kubenswrapper[4758]: > Jan 22 16:57:04 crc kubenswrapper[4758]: I0122 16:57:04.372168 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:57:04 crc kubenswrapper[4758]: I0122 16:57:04.372233 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:57:04 crc kubenswrapper[4758]: I0122 16:57:04.435229 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:57:04 crc kubenswrapper[4758]: I0122 16:57:04.461400 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sgz9b" podStartSLOduration=9.923074764 podStartE2EDuration="20.461368017s" podCreationTimestamp="2026-01-22 16:56:44 +0000 UTC" firstStartedPulling="2026-01-22 16:56:51.438496856 +0000 UTC m=+1632.921836141" lastFinishedPulling="2026-01-22 
16:57:01.976790099 +0000 UTC m=+1643.460129394" observedRunningTime="2026-01-22 16:57:02.705876387 +0000 UTC m=+1644.189215672" watchObservedRunningTime="2026-01-22 16:57:04.461368017 +0000 UTC m=+1645.944707332" Jan 22 16:57:06 crc kubenswrapper[4758]: I0122 16:57:06.237796 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:57:06 crc kubenswrapper[4758]: I0122 16:57:06.693906 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" event={"ID":"4d38f3f0-3531-4733-8548-950b770f2094","Type":"ContainerStarted","Data":"eb358dbe1f692fc4aa7921b94bb3a360b29f9203b501f4e3b9972aa7c8c324fe"} Jan 22 16:57:06 crc kubenswrapper[4758]: I0122 16:57:06.711947 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" podStartSLOduration=2.568774115 podStartE2EDuration="35.711925003s" podCreationTimestamp="2026-01-22 16:56:31 +0000 UTC" firstStartedPulling="2026-01-22 16:56:33.091579394 +0000 UTC m=+1614.574918679" lastFinishedPulling="2026-01-22 16:57:06.234730282 +0000 UTC m=+1647.718069567" observedRunningTime="2026-01-22 16:57:06.70926439 +0000 UTC m=+1648.192603695" watchObservedRunningTime="2026-01-22 16:57:06.711925003 +0000 UTC m=+1648.195264288" Jan 22 16:57:13 crc kubenswrapper[4758]: I0122 16:57:13.128725 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:57:13 crc kubenswrapper[4758]: I0122 16:57:13.180110 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:57:13 crc kubenswrapper[4758]: I0122 16:57:13.369028 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2znq9"] Jan 22 16:57:13 crc kubenswrapper[4758]: I0122 16:57:13.837528 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:57:13 crc kubenswrapper[4758]: I0122 16:57:13.837603 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:57:14 crc kubenswrapper[4758]: I0122 16:57:14.419347 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 16:57:14 crc kubenswrapper[4758]: I0122 16:57:14.767332 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2znq9" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerName="registry-server" containerID="cri-o://07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c" gracePeriod=2 Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.276102 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.453793 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-catalog-content\") pod \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.453879 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-utilities\") pod \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.453915 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wshgp\" (UniqueName: \"kubernetes.io/projected/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-kube-api-access-wshgp\") pod \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\" (UID: \"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b\") " Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.454894 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-utilities" (OuterVolumeSpecName: "utilities") pod "09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" (UID: "09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.460051 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-kube-api-access-wshgp" (OuterVolumeSpecName: "kube-api-access-wshgp") pod "09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" (UID: "09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b"). InnerVolumeSpecName "kube-api-access-wshgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.512339 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" (UID: "09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.555938 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.556139 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.556222 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wshgp\" (UniqueName: \"kubernetes.io/projected/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b-kube-api-access-wshgp\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.780727 4758 generic.go:334] "Generic (PLEG): container finished" podID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerID="07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c" exitCode=0 Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.780788 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2znq9" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.780800 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2znq9" event={"ID":"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b","Type":"ContainerDied","Data":"07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c"} Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.780834 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2znq9" event={"ID":"09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b","Type":"ContainerDied","Data":"795904d11d4df4e0a6729921b302b711a762acc01ad5ee983aeadcfb655da3d3"} Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.780851 4758 scope.go:117] "RemoveContainer" containerID="07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.814051 4758 scope.go:117] "RemoveContainer" containerID="702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.833940 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgz9b"] Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.857351 4758 scope.go:117] "RemoveContainer" containerID="65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.859120 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2znq9"] Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.869972 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2znq9"] Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.905204 4758 scope.go:117] "RemoveContainer" containerID="07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c" Jan 22 16:57:15 crc kubenswrapper[4758]: E0122 16:57:15.905923 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c\": container with ID starting with 07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c 
not found: ID does not exist" containerID="07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.906002 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c"} err="failed to get container status \"07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c\": rpc error: code = NotFound desc = could not find container \"07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c\": container with ID starting with 07df3ae2b2ce621f368b44becf72d44d235a4a477a09aaff67127e124a59ef0c not found: ID does not exist" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.906036 4758 scope.go:117] "RemoveContainer" containerID="702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b" Jan 22 16:57:15 crc kubenswrapper[4758]: E0122 16:57:15.906856 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b\": container with ID starting with 702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b not found: ID does not exist" containerID="702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.906886 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b"} err="failed to get container status \"702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b\": rpc error: code = NotFound desc = could not find container \"702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b\": container with ID starting with 702134962fd3b5bcfe7191df92ac9ac5322cacc9e048110323f22c64607d551b not found: ID does not exist" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.906901 4758 scope.go:117] "RemoveContainer" containerID="65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d" Jan 22 16:57:15 crc kubenswrapper[4758]: E0122 16:57:15.907264 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d\": container with ID starting with 65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d not found: ID does not exist" containerID="65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d" Jan 22 16:57:15 crc kubenswrapper[4758]: I0122 16:57:15.907290 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d"} err="failed to get container status \"65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d\": rpc error: code = NotFound desc = could not find container \"65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d\": container with ID starting with 65fb5e4bffe4319e11147090f6256fd37fa64070dc046a0f2ccf71be9c54e32d not found: ID does not exist" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.186550 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lwpnp"] Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.187336 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lwpnp" 
podUID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerName="registry-server" containerID="cri-o://0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22" gracePeriod=2 Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.797721 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.809180 4758 generic.go:334] "Generic (PLEG): container finished" podID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerID="0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22" exitCode=0 Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.809267 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lwpnp" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.829544 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" path="/var/lib/kubelet/pods/09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b/volumes" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.833471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwpnp" event={"ID":"d5b62b0f-9c35-46f7-b806-69b0a53eaf63","Type":"ContainerDied","Data":"0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22"} Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.833521 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwpnp" event={"ID":"d5b62b0f-9c35-46f7-b806-69b0a53eaf63","Type":"ContainerDied","Data":"06c076e6ccfd35bb44586790e8926666b64a0e5d389a2a44de3dd5bdecdaee28"} Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.833545 4758 scope.go:117] "RemoveContainer" containerID="0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.875477 4758 scope.go:117] "RemoveContainer" containerID="ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.932940 4758 scope.go:117] "RemoveContainer" containerID="cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.989129 4758 scope.go:117] "RemoveContainer" containerID="0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22" Jan 22 16:57:16 crc kubenswrapper[4758]: E0122 16:57:16.990779 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22\": container with ID starting with 0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22 not found: ID does not exist" containerID="0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.990821 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22"} err="failed to get container status \"0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22\": rpc error: code = NotFound desc = could not find container \"0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22\": container with ID starting with 0e22d8dc53fe3d2d2d8b3f5f685d34eefe0ef6d6be6160d098dcdafc56bfdb22 not found: ID does not exist" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 
16:57:16.990851 4758 scope.go:117] "RemoveContainer" containerID="ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8" Jan 22 16:57:16 crc kubenswrapper[4758]: E0122 16:57:16.991218 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8\": container with ID starting with ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8 not found: ID does not exist" containerID="ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.991294 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8"} err="failed to get container status \"ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8\": rpc error: code = NotFound desc = could not find container \"ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8\": container with ID starting with ed7eecd9014522436c4d19d926b9044455795b5aaec98a3ce427dc34de0388b8 not found: ID does not exist" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.991332 4758 scope.go:117] "RemoveContainer" containerID="cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75" Jan 22 16:57:16 crc kubenswrapper[4758]: E0122 16:57:16.994992 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75\": container with ID starting with cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75 not found: ID does not exist" containerID="cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.995062 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75"} err="failed to get container status \"cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75\": rpc error: code = NotFound desc = could not find container \"cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75\": container with ID starting with cc995b7de125b5acec33342e520661ff1ba86d63f44b1900bd6d3fd53b7bbf75 not found: ID does not exist" Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.997547 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-488rc\" (UniqueName: \"kubernetes.io/projected/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-kube-api-access-488rc\") pod \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.997733 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-catalog-content\") pod \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " Jan 22 16:57:16 crc kubenswrapper[4758]: I0122 16:57:16.997901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-utilities\") pod \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\" (UID: \"d5b62b0f-9c35-46f7-b806-69b0a53eaf63\") " Jan 22 16:57:17 crc kubenswrapper[4758]: I0122 
16:57:17.015219 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-utilities" (OuterVolumeSpecName: "utilities") pod "d5b62b0f-9c35-46f7-b806-69b0a53eaf63" (UID: "d5b62b0f-9c35-46f7-b806-69b0a53eaf63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:57:17 crc kubenswrapper[4758]: I0122 16:57:17.017036 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-kube-api-access-488rc" (OuterVolumeSpecName: "kube-api-access-488rc") pod "d5b62b0f-9c35-46f7-b806-69b0a53eaf63" (UID: "d5b62b0f-9c35-46f7-b806-69b0a53eaf63"). InnerVolumeSpecName "kube-api-access-488rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:57:17 crc kubenswrapper[4758]: I0122 16:57:17.095206 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5b62b0f-9c35-46f7-b806-69b0a53eaf63" (UID: "d5b62b0f-9c35-46f7-b806-69b0a53eaf63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:57:17 crc kubenswrapper[4758]: I0122 16:57:17.100503 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:17 crc kubenswrapper[4758]: I0122 16:57:17.100543 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-488rc\" (UniqueName: \"kubernetes.io/projected/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-kube-api-access-488rc\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:17 crc kubenswrapper[4758]: I0122 16:57:17.100559 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b62b0f-9c35-46f7-b806-69b0a53eaf63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:17 crc kubenswrapper[4758]: I0122 16:57:17.185984 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lwpnp"] Jan 22 16:57:17 crc kubenswrapper[4758]: I0122 16:57:17.199837 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lwpnp"] Jan 22 16:57:18 crc kubenswrapper[4758]: I0122 16:57:18.818598 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" path="/var/lib/kubelet/pods/d5b62b0f-9c35-46f7-b806-69b0a53eaf63/volumes" Jan 22 16:57:18 crc kubenswrapper[4758]: I0122 16:57:18.828569 4758 generic.go:334] "Generic (PLEG): container finished" podID="4d38f3f0-3531-4733-8548-950b770f2094" containerID="eb358dbe1f692fc4aa7921b94bb3a360b29f9203b501f4e3b9972aa7c8c324fe" exitCode=0 Jan 22 16:57:18 crc kubenswrapper[4758]: I0122 16:57:18.828617 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" event={"ID":"4d38f3f0-3531-4733-8548-950b770f2094","Type":"ContainerDied","Data":"eb358dbe1f692fc4aa7921b94bb3a360b29f9203b501f4e3b9972aa7c8c324fe"} Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.339950 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.473559 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-ssh-key-openstack-edpm-ipam\") pod \"4d38f3f0-3531-4733-8548-950b770f2094\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.473720 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-inventory\") pod \"4d38f3f0-3531-4733-8548-950b770f2094\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.473819 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpbxw\" (UniqueName: \"kubernetes.io/projected/4d38f3f0-3531-4733-8548-950b770f2094-kube-api-access-kpbxw\") pod \"4d38f3f0-3531-4733-8548-950b770f2094\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.474038 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-repo-setup-combined-ca-bundle\") pod \"4d38f3f0-3531-4733-8548-950b770f2094\" (UID: \"4d38f3f0-3531-4733-8548-950b770f2094\") " Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.481033 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4d38f3f0-3531-4733-8548-950b770f2094" (UID: "4d38f3f0-3531-4733-8548-950b770f2094"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.490626 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d38f3f0-3531-4733-8548-950b770f2094-kube-api-access-kpbxw" (OuterVolumeSpecName: "kube-api-access-kpbxw") pod "4d38f3f0-3531-4733-8548-950b770f2094" (UID: "4d38f3f0-3531-4733-8548-950b770f2094"). InnerVolumeSpecName "kube-api-access-kpbxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.504603 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d38f3f0-3531-4733-8548-950b770f2094" (UID: "4d38f3f0-3531-4733-8548-950b770f2094"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.506754 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-inventory" (OuterVolumeSpecName: "inventory") pod "4d38f3f0-3531-4733-8548-950b770f2094" (UID: "4d38f3f0-3531-4733-8548-950b770f2094"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.576474 4758 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.576521 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.576537 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d38f3f0-3531-4733-8548-950b770f2094-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.576551 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpbxw\" (UniqueName: \"kubernetes.io/projected/4d38f3f0-3531-4733-8548-950b770f2094-kube-api-access-kpbxw\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.851413 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" event={"ID":"4d38f3f0-3531-4733-8548-950b770f2094","Type":"ContainerDied","Data":"f919eae46f546adaa70d2f9e1e46f585ba1cb07cb0b448ad07246c77f7d266ab"} Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.851462 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f919eae46f546adaa70d2f9e1e46f585ba1cb07cb0b448ad07246c77f7d266ab" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.851479 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.936855 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk"] Jan 22 16:57:20 crc kubenswrapper[4758]: E0122 16:57:20.937867 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerName="extract-content" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.937883 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerName="extract-content" Jan 22 16:57:20 crc kubenswrapper[4758]: E0122 16:57:20.937895 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerName="extract-utilities" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.937903 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerName="extract-utilities" Jan 22 16:57:20 crc kubenswrapper[4758]: E0122 16:57:20.937937 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerName="registry-server" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.937944 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerName="registry-server" Jan 22 16:57:20 crc kubenswrapper[4758]: E0122 16:57:20.937965 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerName="extract-utilities" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.937971 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerName="extract-utilities" Jan 22 16:57:20 crc kubenswrapper[4758]: E0122 16:57:20.937978 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d38f3f0-3531-4733-8548-950b770f2094" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.937987 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d38f3f0-3531-4733-8548-950b770f2094" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 16:57:20 crc kubenswrapper[4758]: E0122 16:57:20.938006 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerName="extract-content" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.938011 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerName="extract-content" Jan 22 16:57:20 crc kubenswrapper[4758]: E0122 16:57:20.938036 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerName="registry-server" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.938042 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerName="registry-server" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.938418 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b62b0f-9c35-46f7-b806-69b0a53eaf63" containerName="registry-server" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.938456 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f4a8ef-c28b-4fb5-99a9-f7bb4ca7fd7b" containerName="registry-server" Jan 22 16:57:20 crc 
kubenswrapper[4758]: I0122 16:57:20.938474 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d38f3f0-3531-4733-8548-950b770f2094" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.939462 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.953499 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.953757 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.953977 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.954164 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:57:20 crc kubenswrapper[4758]: I0122 16:57:20.983591 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk"] Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.091391 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xk2\" (UniqueName: \"kubernetes.io/projected/37ddbe64-608a-4aac-9d84-f18a622cf3f4-kube-api-access-z4xk2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7knhk\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.091443 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7knhk\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.091472 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7knhk\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.193856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4xk2\" (UniqueName: \"kubernetes.io/projected/37ddbe64-608a-4aac-9d84-f18a622cf3f4-kube-api-access-z4xk2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7knhk\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.194104 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7knhk\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 
16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.194204 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7knhk\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.197487 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7knhk\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.198292 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7knhk\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.212189 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4xk2\" (UniqueName: \"kubernetes.io/projected/37ddbe64-608a-4aac-9d84-f18a622cf3f4-kube-api-access-z4xk2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7knhk\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.294541 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:21 crc kubenswrapper[4758]: I0122 16:57:21.926304 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk"] Jan 22 16:57:22 crc kubenswrapper[4758]: I0122 16:57:22.869548 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" event={"ID":"37ddbe64-608a-4aac-9d84-f18a622cf3f4","Type":"ContainerStarted","Data":"9fe4a6d237abf29a8b88ed47e44d19e311d059f433e49115ddceb84401659665"} Jan 22 16:57:22 crc kubenswrapper[4758]: I0122 16:57:22.869943 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" event={"ID":"37ddbe64-608a-4aac-9d84-f18a622cf3f4","Type":"ContainerStarted","Data":"d90240c9f9d82d5cec399e98fddcc03ca0d2f5d44f70c519b836e8720e46fd74"} Jan 22 16:57:22 crc kubenswrapper[4758]: I0122 16:57:22.896065 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" podStartSLOduration=2.437264972 podStartE2EDuration="2.894619952s" podCreationTimestamp="2026-01-22 16:57:20 +0000 UTC" firstStartedPulling="2026-01-22 16:57:21.935242395 +0000 UTC m=+1663.418581680" lastFinishedPulling="2026-01-22 16:57:22.392597365 +0000 UTC m=+1663.875936660" observedRunningTime="2026-01-22 16:57:22.887193639 +0000 UTC m=+1664.370532944" watchObservedRunningTime="2026-01-22 16:57:22.894619952 +0000 UTC m=+1664.377959237" Jan 22 16:57:25 crc kubenswrapper[4758]: I0122 16:57:25.904342 4758 generic.go:334] "Generic (PLEG): container finished" podID="37ddbe64-608a-4aac-9d84-f18a622cf3f4" containerID="9fe4a6d237abf29a8b88ed47e44d19e311d059f433e49115ddceb84401659665" exitCode=0 Jan 22 16:57:25 crc kubenswrapper[4758]: I0122 16:57:25.904436 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" event={"ID":"37ddbe64-608a-4aac-9d84-f18a622cf3f4","Type":"ContainerDied","Data":"9fe4a6d237abf29a8b88ed47e44d19e311d059f433e49115ddceb84401659665"} Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.363022 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.519844 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-ssh-key-openstack-edpm-ipam\") pod \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.519958 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-inventory\") pod \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.520074 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4xk2\" (UniqueName: \"kubernetes.io/projected/37ddbe64-608a-4aac-9d84-f18a622cf3f4-kube-api-access-z4xk2\") pod \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\" (UID: \"37ddbe64-608a-4aac-9d84-f18a622cf3f4\") " Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.530236 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ddbe64-608a-4aac-9d84-f18a622cf3f4-kube-api-access-z4xk2" (OuterVolumeSpecName: "kube-api-access-z4xk2") pod "37ddbe64-608a-4aac-9d84-f18a622cf3f4" (UID: "37ddbe64-608a-4aac-9d84-f18a622cf3f4"). InnerVolumeSpecName "kube-api-access-z4xk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.563938 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-inventory" (OuterVolumeSpecName: "inventory") pod "37ddbe64-608a-4aac-9d84-f18a622cf3f4" (UID: "37ddbe64-608a-4aac-9d84-f18a622cf3f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.573896 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37ddbe64-608a-4aac-9d84-f18a622cf3f4" (UID: "37ddbe64-608a-4aac-9d84-f18a622cf3f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.623037 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.623079 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37ddbe64-608a-4aac-9d84-f18a622cf3f4-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.623095 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4xk2\" (UniqueName: \"kubernetes.io/projected/37ddbe64-608a-4aac-9d84-f18a622cf3f4-kube-api-access-z4xk2\") on node \"crc\" DevicePath \"\"" Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.929480 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" event={"ID":"37ddbe64-608a-4aac-9d84-f18a622cf3f4","Type":"ContainerDied","Data":"d90240c9f9d82d5cec399e98fddcc03ca0d2f5d44f70c519b836e8720e46fd74"} Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.929523 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7knhk" Jan 22 16:57:27 crc kubenswrapper[4758]: I0122 16:57:27.929542 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d90240c9f9d82d5cec399e98fddcc03ca0d2f5d44f70c519b836e8720e46fd74" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.055685 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws"] Jan 22 16:57:28 crc kubenswrapper[4758]: E0122 16:57:28.056356 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ddbe64-608a-4aac-9d84-f18a622cf3f4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.056383 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ddbe64-608a-4aac-9d84-f18a622cf3f4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.056586 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ddbe64-608a-4aac-9d84-f18a622cf3f4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.057348 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.059631 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.060941 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.064040 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.064056 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.072540 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws"] Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.133176 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s24wc\" (UniqueName: \"kubernetes.io/projected/7b0250c2-eb08-4c81-9d0b-788f1746df63-kube-api-access-s24wc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.133223 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.133364 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.133393 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.234900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.234950 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.235033 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s24wc\" (UniqueName: \"kubernetes.io/projected/7b0250c2-eb08-4c81-9d0b-788f1746df63-kube-api-access-s24wc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.235057 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.238473 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.238598 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.245150 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.251974 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s24wc\" (UniqueName: \"kubernetes.io/projected/7b0250c2-eb08-4c81-9d0b-788f1746df63-kube-api-access-s24wc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.375193 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 16:57:28 crc kubenswrapper[4758]: I0122 16:57:28.944825 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws"] Jan 22 16:57:29 crc kubenswrapper[4758]: I0122 16:57:29.961395 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" event={"ID":"7b0250c2-eb08-4c81-9d0b-788f1746df63","Type":"ContainerStarted","Data":"d720edadaaa32e6ffb5c4a67f78fb79deddae333c1ccc0a81e6188add965f934"} Jan 22 16:57:29 crc kubenswrapper[4758]: I0122 16:57:29.961945 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" event={"ID":"7b0250c2-eb08-4c81-9d0b-788f1746df63","Type":"ContainerStarted","Data":"86275fdb7db6f9cd93cb4012999cfb2faaa828970f223a22f07b4bcacb8abe9f"} Jan 22 16:57:29 crc kubenswrapper[4758]: I0122 16:57:29.982305 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" podStartSLOduration=1.5863180799999999 podStartE2EDuration="1.982285549s" podCreationTimestamp="2026-01-22 16:57:28 +0000 UTC" firstStartedPulling="2026-01-22 16:57:28.944451326 +0000 UTC m=+1670.427790621" lastFinishedPulling="2026-01-22 16:57:29.340418815 +0000 UTC m=+1670.823758090" observedRunningTime="2026-01-22 16:57:29.979232716 +0000 UTC m=+1671.462572011" watchObservedRunningTime="2026-01-22 16:57:29.982285549 +0000 UTC m=+1671.465624834" Jan 22 16:57:43 crc kubenswrapper[4758]: I0122 16:57:43.837110 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:57:43 crc kubenswrapper[4758]: I0122 16:57:43.837713 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:57:43 crc kubenswrapper[4758]: I0122 16:57:43.837784 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 16:57:43 crc kubenswrapper[4758]: I0122 16:57:43.838568 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:57:43 crc kubenswrapper[4758]: I0122 16:57:43.838614 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" gracePeriod=600 Jan 22 16:57:43 crc kubenswrapper[4758]: E0122 16:57:43.961095 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:57:44 crc kubenswrapper[4758]: I0122 16:57:44.089979 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" exitCode=0 Jan 22 16:57:44 crc kubenswrapper[4758]: I0122 16:57:44.090033 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab"} Jan 22 16:57:44 crc kubenswrapper[4758]: I0122 16:57:44.090076 4758 scope.go:117] "RemoveContainer" containerID="199c6be88db26753015fa9e30b754aa271b4aa087623fd5be9e93878eddbc087" Jan 22 16:57:44 crc kubenswrapper[4758]: I0122 16:57:44.090660 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:57:44 crc kubenswrapper[4758]: E0122 16:57:44.091067 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:57:57 crc kubenswrapper[4758]: I0122 16:57:57.808166 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:57:57 crc kubenswrapper[4758]: E0122 16:57:57.808931 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:57:59 crc kubenswrapper[4758]: I0122 16:57:59.123622 4758 scope.go:117] "RemoveContainer" containerID="47f3726f7387eb4ba5b0e7d9659e764c8691d09a28b4e2e4cf0e7bb995fe1b82" Jan 22 16:58:08 crc kubenswrapper[4758]: I0122 16:58:08.808312 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:58:08 crc kubenswrapper[4758]: E0122 16:58:08.809074 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:58:20 crc kubenswrapper[4758]: I0122 16:58:20.809118 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:58:20 crc kubenswrapper[4758]: E0122 16:58:20.810222 4758 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:58:35 crc kubenswrapper[4758]: I0122 16:58:35.807803 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:58:35 crc kubenswrapper[4758]: E0122 16:58:35.808492 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:58:47 crc kubenswrapper[4758]: I0122 16:58:47.808132 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:58:47 crc kubenswrapper[4758]: E0122 16:58:47.809298 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:58:58 crc kubenswrapper[4758]: I0122 16:58:58.808551 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:58:58 crc kubenswrapper[4758]: E0122 16:58:58.809506 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:58:59 crc kubenswrapper[4758]: I0122 16:58:59.238026 4758 scope.go:117] "RemoveContainer" containerID="04709b65415b5ce55c5e501fd59e6359307278c8ee978a585a593c53c836b627" Jan 22 16:58:59 crc kubenswrapper[4758]: I0122 16:58:59.275309 4758 scope.go:117] "RemoveContainer" containerID="f2426189211acadfd3582be9b2a5a2092dba629617ca09b0e836b6e6e3773f47" Jan 22 16:58:59 crc kubenswrapper[4758]: I0122 16:58:59.304090 4758 scope.go:117] "RemoveContainer" containerID="9ed7716389bb42108c02f6a05b6d832a0b8d104d94073ae0784b73206992c4da" Jan 22 16:58:59 crc kubenswrapper[4758]: I0122 16:58:59.353200 4758 scope.go:117] "RemoveContainer" containerID="67f04c8edd0bfadf7999eb3e60499af7612f6aba062524c649cf701fd1c49e86" Jan 22 16:58:59 crc kubenswrapper[4758]: I0122 16:58:59.371815 4758 scope.go:117] "RemoveContainer" containerID="a242bb86d02a02912959476d1e89c5801e3e8b0a179d33e8ede7e504d5a32eae" Jan 22 16:59:13 crc kubenswrapper[4758]: I0122 16:59:13.809109 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:59:13 crc 
kubenswrapper[4758]: E0122 16:59:13.809994 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:59:24 crc kubenswrapper[4758]: I0122 16:59:24.808447 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:59:24 crc kubenswrapper[4758]: E0122 16:59:24.809269 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:59:36 crc kubenswrapper[4758]: I0122 16:59:36.808804 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:59:36 crc kubenswrapper[4758]: E0122 16:59:36.809995 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 16:59:50 crc kubenswrapper[4758]: I0122 16:59:50.809079 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 16:59:50 crc kubenswrapper[4758]: E0122 16:59:50.809971 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.155895 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk"] Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.158354 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.160949 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.161329 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.167065 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk"] Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.237936 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m94j\" (UniqueName: \"kubernetes.io/projected/db2c0313-5662-42cc-bb1d-6a3d53379b40-kube-api-access-5m94j\") pod \"collect-profiles-29485020-7b4nk\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.238059 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db2c0313-5662-42cc-bb1d-6a3d53379b40-secret-volume\") pod \"collect-profiles-29485020-7b4nk\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.238147 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2c0313-5662-42cc-bb1d-6a3d53379b40-config-volume\") pod \"collect-profiles-29485020-7b4nk\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.339593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2c0313-5662-42cc-bb1d-6a3d53379b40-config-volume\") pod \"collect-profiles-29485020-7b4nk\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.339702 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m94j\" (UniqueName: \"kubernetes.io/projected/db2c0313-5662-42cc-bb1d-6a3d53379b40-kube-api-access-5m94j\") pod \"collect-profiles-29485020-7b4nk\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.339821 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db2c0313-5662-42cc-bb1d-6a3d53379b40-secret-volume\") pod \"collect-profiles-29485020-7b4nk\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.340884 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2c0313-5662-42cc-bb1d-6a3d53379b40-config-volume\") pod 
\"collect-profiles-29485020-7b4nk\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.345764 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db2c0313-5662-42cc-bb1d-6a3d53379b40-secret-volume\") pod \"collect-profiles-29485020-7b4nk\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.358528 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m94j\" (UniqueName: \"kubernetes.io/projected/db2c0313-5662-42cc-bb1d-6a3d53379b40-kube-api-access-5m94j\") pod \"collect-profiles-29485020-7b4nk\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:00 crc kubenswrapper[4758]: I0122 17:00:00.487320 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:01 crc kubenswrapper[4758]: I0122 17:00:01.108712 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk"] Jan 22 17:00:01 crc kubenswrapper[4758]: I0122 17:00:01.808899 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:00:01 crc kubenswrapper[4758]: E0122 17:00:01.809611 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:00:02 crc kubenswrapper[4758]: I0122 17:00:02.115425 4758 generic.go:334] "Generic (PLEG): container finished" podID="db2c0313-5662-42cc-bb1d-6a3d53379b40" containerID="039d08746163c267120e43d8643da740fdb10bbd0d750cdc23358467be8cc8f8" exitCode=0 Jan 22 17:00:02 crc kubenswrapper[4758]: I0122 17:00:02.115491 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" event={"ID":"db2c0313-5662-42cc-bb1d-6a3d53379b40","Type":"ContainerDied","Data":"039d08746163c267120e43d8643da740fdb10bbd0d750cdc23358467be8cc8f8"} Jan 22 17:00:02 crc kubenswrapper[4758]: I0122 17:00:02.115543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" event={"ID":"db2c0313-5662-42cc-bb1d-6a3d53379b40","Type":"ContainerStarted","Data":"81b139c84e200c6eee924d1b9b36c1456956ddc8a56dcbd5460c1cf9bff0ffe5"} Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.610181 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.743476 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2c0313-5662-42cc-bb1d-6a3d53379b40-config-volume\") pod \"db2c0313-5662-42cc-bb1d-6a3d53379b40\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.743683 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m94j\" (UniqueName: \"kubernetes.io/projected/db2c0313-5662-42cc-bb1d-6a3d53379b40-kube-api-access-5m94j\") pod \"db2c0313-5662-42cc-bb1d-6a3d53379b40\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.743873 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db2c0313-5662-42cc-bb1d-6a3d53379b40-secret-volume\") pod \"db2c0313-5662-42cc-bb1d-6a3d53379b40\" (UID: \"db2c0313-5662-42cc-bb1d-6a3d53379b40\") " Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.744430 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2c0313-5662-42cc-bb1d-6a3d53379b40-config-volume" (OuterVolumeSpecName: "config-volume") pod "db2c0313-5662-42cc-bb1d-6a3d53379b40" (UID: "db2c0313-5662-42cc-bb1d-6a3d53379b40"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.749470 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2c0313-5662-42cc-bb1d-6a3d53379b40-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "db2c0313-5662-42cc-bb1d-6a3d53379b40" (UID: "db2c0313-5662-42cc-bb1d-6a3d53379b40"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.749815 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2c0313-5662-42cc-bb1d-6a3d53379b40-kube-api-access-5m94j" (OuterVolumeSpecName: "kube-api-access-5m94j") pod "db2c0313-5662-42cc-bb1d-6a3d53379b40" (UID: "db2c0313-5662-42cc-bb1d-6a3d53379b40"). InnerVolumeSpecName "kube-api-access-5m94j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.846124 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m94j\" (UniqueName: \"kubernetes.io/projected/db2c0313-5662-42cc-bb1d-6a3d53379b40-kube-api-access-5m94j\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.846169 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db2c0313-5662-42cc-bb1d-6a3d53379b40-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:03 crc kubenswrapper[4758]: I0122 17:00:03.846187 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2c0313-5662-42cc-bb1d-6a3d53379b40-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:04 crc kubenswrapper[4758]: I0122 17:00:04.139877 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" event={"ID":"db2c0313-5662-42cc-bb1d-6a3d53379b40","Type":"ContainerDied","Data":"81b139c84e200c6eee924d1b9b36c1456956ddc8a56dcbd5460c1cf9bff0ffe5"} Jan 22 17:00:04 crc kubenswrapper[4758]: I0122 17:00:04.139928 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81b139c84e200c6eee924d1b9b36c1456956ddc8a56dcbd5460c1cf9bff0ffe5" Jan 22 17:00:04 crc kubenswrapper[4758]: I0122 17:00:04.140232 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk" Jan 22 17:00:16 crc kubenswrapper[4758]: I0122 17:00:16.808581 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:00:16 crc kubenswrapper[4758]: E0122 17:00:16.809717 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:00:23 crc kubenswrapper[4758]: I0122 17:00:23.057885 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-xjlxr"] Jan 22 17:00:23 crc kubenswrapper[4758]: I0122 17:00:23.073865 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-xjlxr"] Jan 22 17:00:24 crc kubenswrapper[4758]: I0122 17:00:24.033260 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-30be-account-create-update-g9s9d"] Jan 22 17:00:24 crc kubenswrapper[4758]: I0122 17:00:24.044506 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-30be-account-create-update-g9s9d"] Jan 22 17:00:24 crc kubenswrapper[4758]: I0122 17:00:24.822098 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ddf744-aa02-471f-b73c-930924240fa9" path="/var/lib/kubelet/pods/29ddf744-aa02-471f-b73c-930924240fa9/volumes" Jan 22 17:00:24 crc kubenswrapper[4758]: I0122 17:00:24.823011 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b6d798-7571-43c0-8202-0634015602ff" path="/var/lib/kubelet/pods/e7b6d798-7571-43c0-8202-0634015602ff/volumes" Jan 22 17:00:29 crc kubenswrapper[4758]: I0122 17:00:29.808535 
4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:00:29 crc kubenswrapper[4758]: E0122 17:00:29.809188 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:00:31 crc kubenswrapper[4758]: I0122 17:00:31.039942 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-l5sww"] Jan 22 17:00:31 crc kubenswrapper[4758]: I0122 17:00:31.078916 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c3ab-account-create-update-4wv2p"] Jan 22 17:00:31 crc kubenswrapper[4758]: I0122 17:00:31.091249 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-l8rd6"] Jan 22 17:00:31 crc kubenswrapper[4758]: I0122 17:00:31.099543 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-l5sww"] Jan 22 17:00:31 crc kubenswrapper[4758]: I0122 17:00:31.110688 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-l8rd6"] Jan 22 17:00:31 crc kubenswrapper[4758]: I0122 17:00:31.120765 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-afd0-account-create-update-jtfps"] Jan 22 17:00:31 crc kubenswrapper[4758]: I0122 17:00:31.130111 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-afd0-account-create-update-jtfps"] Jan 22 17:00:31 crc kubenswrapper[4758]: I0122 17:00:31.139327 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c3ab-account-create-update-4wv2p"] Jan 22 17:00:32 crc kubenswrapper[4758]: I0122 17:00:32.822428 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6088aa85-eb17-48ba-badd-ea46ba4333bb" path="/var/lib/kubelet/pods/6088aa85-eb17-48ba-badd-ea46ba4333bb/volumes" Jan 22 17:00:32 crc kubenswrapper[4758]: I0122 17:00:32.823294 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f206eab-3576-41d8-b0b8-abbf89628582" path="/var/lib/kubelet/pods/6f206eab-3576-41d8-b0b8-abbf89628582/volumes" Jan 22 17:00:32 crc kubenswrapper[4758]: I0122 17:00:32.824166 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e50e041-f10a-43dc-9ba9-1b8adf5d0296" path="/var/lib/kubelet/pods/8e50e041-f10a-43dc-9ba9-1b8adf5d0296/volumes" Jan 22 17:00:32 crc kubenswrapper[4758]: I0122 17:00:32.824993 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17a8111-b550-4c28-98bf-fe568e5f35f5" path="/var/lib/kubelet/pods/b17a8111-b550-4c28-98bf-fe568e5f35f5/volumes" Jan 22 17:00:39 crc kubenswrapper[4758]: I0122 17:00:39.045177 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nkd9g"] Jan 22 17:00:39 crc kubenswrapper[4758]: I0122 17:00:39.054094 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nkd9g"] Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.643606 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m4f84"] Jan 22 17:00:40 crc kubenswrapper[4758]: E0122 17:00:40.644642 4758 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="db2c0313-5662-42cc-bb1d-6a3d53379b40" containerName="collect-profiles" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.644663 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2c0313-5662-42cc-bb1d-6a3d53379b40" containerName="collect-profiles" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.644971 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2c0313-5662-42cc-bb1d-6a3d53379b40" containerName="collect-profiles" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.647428 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.664406 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4f84"] Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.758370 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rsfh\" (UniqueName: \"kubernetes.io/projected/466f9d08-4979-410b-88dc-106c7fcef5a7-kube-api-access-8rsfh\") pod \"redhat-operators-m4f84\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.758530 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-utilities\") pod \"redhat-operators-m4f84\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.758580 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-catalog-content\") pod \"redhat-operators-m4f84\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.820931 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ee1a0a-56ed-4b6f-9331-9794a22bf5dd" path="/var/lib/kubelet/pods/62ee1a0a-56ed-4b6f-9331-9794a22bf5dd/volumes" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.860319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rsfh\" (UniqueName: \"kubernetes.io/projected/466f9d08-4979-410b-88dc-106c7fcef5a7-kube-api-access-8rsfh\") pod \"redhat-operators-m4f84\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.860460 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-utilities\") pod \"redhat-operators-m4f84\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.860515 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-catalog-content\") pod \"redhat-operators-m4f84\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc 
kubenswrapper[4758]: I0122 17:00:40.861132 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-catalog-content\") pod \"redhat-operators-m4f84\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.861763 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-utilities\") pod \"redhat-operators-m4f84\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.885437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rsfh\" (UniqueName: \"kubernetes.io/projected/466f9d08-4979-410b-88dc-106c7fcef5a7-kube-api-access-8rsfh\") pod \"redhat-operators-m4f84\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:40 crc kubenswrapper[4758]: I0122 17:00:40.984534 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:00:41 crc kubenswrapper[4758]: I0122 17:00:41.589085 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4f84"] Jan 22 17:00:42 crc kubenswrapper[4758]: I0122 17:00:42.575108 4758 generic.go:334] "Generic (PLEG): container finished" podID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerID="e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d" exitCode=0 Jan 22 17:00:42 crc kubenswrapper[4758]: I0122 17:00:42.575179 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4f84" event={"ID":"466f9d08-4979-410b-88dc-106c7fcef5a7","Type":"ContainerDied","Data":"e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d"} Jan 22 17:00:42 crc kubenswrapper[4758]: I0122 17:00:42.575433 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4f84" event={"ID":"466f9d08-4979-410b-88dc-106c7fcef5a7","Type":"ContainerStarted","Data":"78c05fb19baae93d11bb30a2c1cb57ba63897a3d61ce7f9b1f892d0b993881f0"} Jan 22 17:00:42 crc kubenswrapper[4758]: I0122 17:00:42.808040 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:00:42 crc kubenswrapper[4758]: E0122 17:00:42.808634 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:00:43 crc kubenswrapper[4758]: I0122 17:00:43.587312 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4f84" event={"ID":"466f9d08-4979-410b-88dc-106c7fcef5a7","Type":"ContainerStarted","Data":"06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82"} Jan 22 17:00:43 crc kubenswrapper[4758]: I0122 17:00:43.856353 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gmcqc"] Jan 22 
17:00:43 crc kubenswrapper[4758]: I0122 17:00:43.859430 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:43 crc kubenswrapper[4758]: I0122 17:00:43.868580 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmcqc"] Jan 22 17:00:43 crc kubenswrapper[4758]: I0122 17:00:43.951480 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-utilities\") pod \"redhat-marketplace-gmcqc\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:43 crc kubenswrapper[4758]: I0122 17:00:43.951557 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-catalog-content\") pod \"redhat-marketplace-gmcqc\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:43 crc kubenswrapper[4758]: I0122 17:00:43.951620 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9m9b\" (UniqueName: \"kubernetes.io/projected/b945d1ce-7e02-4280-9197-d91a149dba2d-kube-api-access-w9m9b\") pod \"redhat-marketplace-gmcqc\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:44 crc kubenswrapper[4758]: I0122 17:00:44.052706 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-utilities\") pod \"redhat-marketplace-gmcqc\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:44 crc kubenswrapper[4758]: I0122 17:00:44.052801 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-catalog-content\") pod \"redhat-marketplace-gmcqc\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:44 crc kubenswrapper[4758]: I0122 17:00:44.052854 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9m9b\" (UniqueName: \"kubernetes.io/projected/b945d1ce-7e02-4280-9197-d91a149dba2d-kube-api-access-w9m9b\") pod \"redhat-marketplace-gmcqc\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:44 crc kubenswrapper[4758]: I0122 17:00:44.053136 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-utilities\") pod \"redhat-marketplace-gmcqc\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:44 crc kubenswrapper[4758]: I0122 17:00:44.053424 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-catalog-content\") pod \"redhat-marketplace-gmcqc\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 
17:00:44 crc kubenswrapper[4758]: I0122 17:00:44.078863 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9m9b\" (UniqueName: \"kubernetes.io/projected/b945d1ce-7e02-4280-9197-d91a149dba2d-kube-api-access-w9m9b\") pod \"redhat-marketplace-gmcqc\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:44 crc kubenswrapper[4758]: I0122 17:00:44.178731 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:44 crc kubenswrapper[4758]: W0122 17:00:44.716199 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb945d1ce_7e02_4280_9197_d91a149dba2d.slice/crio-63d34f35b674fa3123ff76af7295b0d6221e17d20c92d0451dfeaaa9386255bb WatchSource:0}: Error finding container 63d34f35b674fa3123ff76af7295b0d6221e17d20c92d0451dfeaaa9386255bb: Status 404 returned error can't find the container with id 63d34f35b674fa3123ff76af7295b0d6221e17d20c92d0451dfeaaa9386255bb Jan 22 17:00:44 crc kubenswrapper[4758]: I0122 17:00:44.721852 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmcqc"] Jan 22 17:00:45 crc kubenswrapper[4758]: I0122 17:00:45.607841 4758 generic.go:334] "Generic (PLEG): container finished" podID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerID="b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c" exitCode=0 Jan 22 17:00:45 crc kubenswrapper[4758]: I0122 17:00:45.607944 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmcqc" event={"ID":"b945d1ce-7e02-4280-9197-d91a149dba2d","Type":"ContainerDied","Data":"b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c"} Jan 22 17:00:45 crc kubenswrapper[4758]: I0122 17:00:45.608220 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmcqc" event={"ID":"b945d1ce-7e02-4280-9197-d91a149dba2d","Type":"ContainerStarted","Data":"63d34f35b674fa3123ff76af7295b0d6221e17d20c92d0451dfeaaa9386255bb"} Jan 22 17:00:47 crc kubenswrapper[4758]: I0122 17:00:47.045561 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2zsn9"] Jan 22 17:00:47 crc kubenswrapper[4758]: I0122 17:00:47.059240 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2zsn9"] Jan 22 17:00:48 crc kubenswrapper[4758]: I0122 17:00:48.045818 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b2f3-account-create-update-fw588"] Jan 22 17:00:48 crc kubenswrapper[4758]: I0122 17:00:48.060851 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vh5np"] Jan 22 17:00:48 crc kubenswrapper[4758]: I0122 17:00:48.069846 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3a43-account-create-update-bljp2"] Jan 22 17:00:48 crc kubenswrapper[4758]: I0122 17:00:48.078538 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b2f3-account-create-update-fw588"] Jan 22 17:00:48 crc kubenswrapper[4758]: I0122 17:00:48.087298 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3a43-account-create-update-bljp2"] Jan 22 17:00:48 crc kubenswrapper[4758]: I0122 17:00:48.099858 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vh5np"] Jan 22 17:00:48 crc 
kubenswrapper[4758]: I0122 17:00:48.818606 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23eda699-be19-45a4-8fac-2f3c8d1f38f6" path="/var/lib/kubelet/pods/23eda699-be19-45a4-8fac-2f3c8d1f38f6/volumes" Jan 22 17:00:48 crc kubenswrapper[4758]: I0122 17:00:48.819274 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6623f30f-8f61-4f19-962f-de3e10559547" path="/var/lib/kubelet/pods/6623f30f-8f61-4f19-962f-de3e10559547/volumes" Jan 22 17:00:48 crc kubenswrapper[4758]: I0122 17:00:48.819929 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4788613-d2cb-49ab-89de-a8c4492d02fb" path="/var/lib/kubelet/pods/d4788613-d2cb-49ab-89de-a8c4492d02fb/volumes" Jan 22 17:00:48 crc kubenswrapper[4758]: I0122 17:00:48.820621 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b" path="/var/lib/kubelet/pods/fff1f29f-f0e8-4fff-b964-9e9c9dc7f60b/volumes" Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.028668 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-da06-account-create-update-bbdrz"] Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.037562 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fc5a-account-create-update-rxqdl"] Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.046425 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-da06-account-create-update-bbdrz"] Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.055716 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fc5a-account-create-update-rxqdl"] Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.691395 4758 generic.go:334] "Generic (PLEG): container finished" podID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerID="06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82" exitCode=0 Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.691523 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4f84" event={"ID":"466f9d08-4979-410b-88dc-106c7fcef5a7","Type":"ContainerDied","Data":"06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82"} Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.694055 4758 generic.go:334] "Generic (PLEG): container finished" podID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerID="f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582" exitCode=0 Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.694107 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmcqc" event={"ID":"b945d1ce-7e02-4280-9197-d91a149dba2d","Type":"ContainerDied","Data":"f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582"} Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.820364 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd" path="/var/lib/kubelet/pods/09e5cd9a-2eae-49cd-b9a8-37b9d0a109dd/volumes" Jan 22 17:00:52 crc kubenswrapper[4758]: I0122 17:00:52.893046 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d309a140-33cc-4a62-b068-8ebc4797ee7e" path="/var/lib/kubelet/pods/d309a140-33cc-4a62-b068-8ebc4797ee7e/volumes" Jan 22 17:00:53 crc kubenswrapper[4758]: I0122 17:00:53.706623 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmcqc" 
event={"ID":"b945d1ce-7e02-4280-9197-d91a149dba2d","Type":"ContainerStarted","Data":"1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5"} Jan 22 17:00:53 crc kubenswrapper[4758]: I0122 17:00:53.710239 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4f84" event={"ID":"466f9d08-4979-410b-88dc-106c7fcef5a7","Type":"ContainerStarted","Data":"4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355"} Jan 22 17:00:53 crc kubenswrapper[4758]: I0122 17:00:53.740600 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gmcqc" podStartSLOduration=3.192511829 podStartE2EDuration="10.740555522s" podCreationTimestamp="2026-01-22 17:00:43 +0000 UTC" firstStartedPulling="2026-01-22 17:00:45.60975604 +0000 UTC m=+1867.093095315" lastFinishedPulling="2026-01-22 17:00:53.157799723 +0000 UTC m=+1874.641139008" observedRunningTime="2026-01-22 17:00:53.730668553 +0000 UTC m=+1875.214007838" watchObservedRunningTime="2026-01-22 17:00:53.740555522 +0000 UTC m=+1875.223894807" Jan 22 17:00:53 crc kubenswrapper[4758]: I0122 17:00:53.753923 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m4f84" podStartSLOduration=3.13377696 podStartE2EDuration="13.753909186s" podCreationTimestamp="2026-01-22 17:00:40 +0000 UTC" firstStartedPulling="2026-01-22 17:00:42.576961897 +0000 UTC m=+1864.060301182" lastFinishedPulling="2026-01-22 17:00:53.197094123 +0000 UTC m=+1874.680433408" observedRunningTime="2026-01-22 17:00:53.748439857 +0000 UTC m=+1875.231779142" watchObservedRunningTime="2026-01-22 17:00:53.753909186 +0000 UTC m=+1875.237248471" Jan 22 17:00:54 crc kubenswrapper[4758]: I0122 17:00:54.179655 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:54 crc kubenswrapper[4758]: I0122 17:00:54.179712 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:00:55 crc kubenswrapper[4758]: I0122 17:00:55.335279 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gmcqc" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerName="registry-server" probeResult="failure" output=< Jan 22 17:00:55 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 17:00:55 crc kubenswrapper[4758]: > Jan 22 17:00:55 crc kubenswrapper[4758]: I0122 17:00:55.808093 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:00:55 crc kubenswrapper[4758]: E0122 17:00:55.808404 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:00:57 crc kubenswrapper[4758]: I0122 17:00:57.042047 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-bnlhc"] Jan 22 17:00:57 crc kubenswrapper[4758]: I0122 17:00:57.065776 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-t9c62"] Jan 22 17:00:57 crc 
kubenswrapper[4758]: I0122 17:00:57.080116 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-bnlhc"] Jan 22 17:00:57 crc kubenswrapper[4758]: I0122 17:00:57.089623 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-t9c62"] Jan 22 17:00:58 crc kubenswrapper[4758]: I0122 17:00:58.845851 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9" path="/var/lib/kubelet/pods/2d7a40ed-25c4-4645-aaf7-3aa28db8a4d9/volumes" Jan 22 17:00:58 crc kubenswrapper[4758]: I0122 17:00:58.847369 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6339d32-557a-4f41-9d09-47d3d469615b" path="/var/lib/kubelet/pods/d6339d32-557a-4f41-9d09-47d3d469615b/volumes" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.496662 4758 scope.go:117] "RemoveContainer" containerID="3fe78ddb8dfabbeacf6f86657d4b5eb27420a594a65d6447bc24e45a87b0a904" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.523052 4758 scope.go:117] "RemoveContainer" containerID="3eddb0ef8815d265f88e986e83965bce976c3e51fc1ceade2b1edecf333345c1" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.575783 4758 scope.go:117] "RemoveContainer" containerID="57dc96b60b41bef411c51e22d61f3a99adfd8b0a25b87b1c688415879ba8b0c8" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.638168 4758 scope.go:117] "RemoveContainer" containerID="1c0441153a28ebae1ae1fa6b73cb559e2b13d1e47a67db0d817f5d93516c0cfc" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.681650 4758 scope.go:117] "RemoveContainer" containerID="9524b56c17eb2be9b8ab61ae60d1b7412aa8f1c20eca9f8a67bfb978da9b521a" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.741078 4758 scope.go:117] "RemoveContainer" containerID="818cc3b404e4fd2fb5e62d703eb4b4c480fdc26206d367b5df320f8ff6b3df0c" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.794479 4758 scope.go:117] "RemoveContainer" containerID="32b2ec1d0f08d3576322d3b66eefe81d3b933190f39ae9a3e89af393bd1813be" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.814283 4758 scope.go:117] "RemoveContainer" containerID="716bd19f34b2ec2483d2092ba3ebbf34cdcd098584b55275af7b77ca138bc641" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.839364 4758 scope.go:117] "RemoveContainer" containerID="d4a5a79f8ce228446900755606f405e2fdddd5132988345e404ba3c46c6d5042" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.857477 4758 scope.go:117] "RemoveContainer" containerID="0d899bbe793e7a5b80e44ff2448fbbba283d3f13640a7753a1fd7485004810a6" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.878228 4758 scope.go:117] "RemoveContainer" containerID="f4fdfdff907b80dc70c534d77dded51a1ac543c32451c95b631ccf3415267efd" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.901829 4758 scope.go:117] "RemoveContainer" containerID="18e25043ab877bb2458ff5807e35169391dfd0b5fdb2cf2112b01d99b511d337" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.921319 4758 scope.go:117] "RemoveContainer" containerID="9a928b13a6fc9a2c3c13e543346e2f247b7402f20c59f02530e85bedd2444b50" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.945660 4758 scope.go:117] "RemoveContainer" containerID="e586865eb66267a3854bb5f1f73b70e9e31667c2b02fd183592cbcd018d079f7" Jan 22 17:00:59 crc kubenswrapper[4758]: I0122 17:00:59.977927 4758 scope.go:117] "RemoveContainer" containerID="64c0454d4a55b6a22687f960b3ae8567b4cf7d59a0792efca16181efdc7529ee" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.171676 4758 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29485021-kphfm"] Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.172919 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485021-kphfm"] Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.172998 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.217476 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-combined-ca-bundle\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.217728 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-fernet-keys\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.217901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n6kk\" (UniqueName: \"kubernetes.io/projected/5d061133-6e47-4b25-951f-01e66858742e-kube-api-access-8n6kk\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.217964 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-config-data\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.319691 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-combined-ca-bundle\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.319956 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-fernet-keys\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.320119 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n6kk\" (UniqueName: \"kubernetes.io/projected/5d061133-6e47-4b25-951f-01e66858742e-kube-api-access-8n6kk\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.320234 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-config-data\") pod \"keystone-cron-29485021-kphfm\" (UID: 
\"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.325965 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-fernet-keys\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.326258 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-config-data\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.327275 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-combined-ca-bundle\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.335341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n6kk\" (UniqueName: \"kubernetes.io/projected/5d061133-6e47-4b25-951f-01e66858742e-kube-api-access-8n6kk\") pod \"keystone-cron-29485021-kphfm\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.546362 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.989189 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485021-kphfm"] Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.989457 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:01:00 crc kubenswrapper[4758]: I0122 17:01:00.989473 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:01:00 crc kubenswrapper[4758]: W0122 17:01:00.994727 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d061133_6e47_4b25_951f_01e66858742e.slice/crio-aa12dc7cca41d192b85cff41b708153a61a6edc077857fe548abbc549763d2cf WatchSource:0}: Error finding container aa12dc7cca41d192b85cff41b708153a61a6edc077857fe548abbc549763d2cf: Status 404 returned error can't find the container with id aa12dc7cca41d192b85cff41b708153a61a6edc077857fe548abbc549763d2cf Jan 22 17:01:01 crc kubenswrapper[4758]: I0122 17:01:01.055312 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:01:01 crc kubenswrapper[4758]: I0122 17:01:01.795944 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485021-kphfm" event={"ID":"5d061133-6e47-4b25-951f-01e66858742e","Type":"ContainerStarted","Data":"f1e06b091fbb7f5892ae2af46579c4d20a9db5f9c441036e4d94bf9f4eb3cf67"} Jan 22 17:01:01 crc kubenswrapper[4758]: I0122 17:01:01.795986 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485021-kphfm" 
event={"ID":"5d061133-6e47-4b25-951f-01e66858742e","Type":"ContainerStarted","Data":"aa12dc7cca41d192b85cff41b708153a61a6edc077857fe548abbc549763d2cf"} Jan 22 17:01:01 crc kubenswrapper[4758]: I0122 17:01:01.840842 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29485021-kphfm" podStartSLOduration=1.840820592 podStartE2EDuration="1.840820592s" podCreationTimestamp="2026-01-22 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:01.815883713 +0000 UTC m=+1883.299222998" watchObservedRunningTime="2026-01-22 17:01:01.840820592 +0000 UTC m=+1883.324159877" Jan 22 17:01:01 crc kubenswrapper[4758]: I0122 17:01:01.875276 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:01:01 crc kubenswrapper[4758]: I0122 17:01:01.930087 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4f84"] Jan 22 17:01:03 crc kubenswrapper[4758]: I0122 17:01:03.824366 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m4f84" podUID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerName="registry-server" containerID="cri-o://4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355" gracePeriod=2 Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.334987 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.384059 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.434048 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.519470 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rsfh\" (UniqueName: \"kubernetes.io/projected/466f9d08-4979-410b-88dc-106c7fcef5a7-kube-api-access-8rsfh\") pod \"466f9d08-4979-410b-88dc-106c7fcef5a7\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.519610 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-catalog-content\") pod \"466f9d08-4979-410b-88dc-106c7fcef5a7\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.519805 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-utilities\") pod \"466f9d08-4979-410b-88dc-106c7fcef5a7\" (UID: \"466f9d08-4979-410b-88dc-106c7fcef5a7\") " Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.520721 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-utilities" (OuterVolumeSpecName: "utilities") pod "466f9d08-4979-410b-88dc-106c7fcef5a7" (UID: "466f9d08-4979-410b-88dc-106c7fcef5a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.526996 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466f9d08-4979-410b-88dc-106c7fcef5a7-kube-api-access-8rsfh" (OuterVolumeSpecName: "kube-api-access-8rsfh") pod "466f9d08-4979-410b-88dc-106c7fcef5a7" (UID: "466f9d08-4979-410b-88dc-106c7fcef5a7"). InnerVolumeSpecName "kube-api-access-8rsfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.621987 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.622024 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rsfh\" (UniqueName: \"kubernetes.io/projected/466f9d08-4979-410b-88dc-106c7fcef5a7-kube-api-access-8rsfh\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.664872 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "466f9d08-4979-410b-88dc-106c7fcef5a7" (UID: "466f9d08-4979-410b-88dc-106c7fcef5a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.724194 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466f9d08-4979-410b-88dc-106c7fcef5a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.837666 4758 generic.go:334] "Generic (PLEG): container finished" podID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerID="4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355" exitCode=0 Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.837735 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4f84" event={"ID":"466f9d08-4979-410b-88dc-106c7fcef5a7","Type":"ContainerDied","Data":"4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355"} Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.837767 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m4f84" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.837803 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4f84" event={"ID":"466f9d08-4979-410b-88dc-106c7fcef5a7","Type":"ContainerDied","Data":"78c05fb19baae93d11bb30a2c1cb57ba63897a3d61ce7f9b1f892d0b993881f0"} Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.837837 4758 scope.go:117] "RemoveContainer" containerID="4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.842095 4758 generic.go:334] "Generic (PLEG): container finished" podID="5d061133-6e47-4b25-951f-01e66858742e" containerID="f1e06b091fbb7f5892ae2af46579c4d20a9db5f9c441036e4d94bf9f4eb3cf67" exitCode=0 Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.843246 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485021-kphfm" event={"ID":"5d061133-6e47-4b25-951f-01e66858742e","Type":"ContainerDied","Data":"f1e06b091fbb7f5892ae2af46579c4d20a9db5f9c441036e4d94bf9f4eb3cf67"} Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.864665 4758 scope.go:117] "RemoveContainer" containerID="06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.919033 4758 scope.go:117] "RemoveContainer" containerID="e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.925766 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4f84"] Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.939531 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m4f84"] Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.972658 4758 scope.go:117] "RemoveContainer" containerID="4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355" Jan 22 17:01:04 crc kubenswrapper[4758]: E0122 17:01:04.973376 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355\": container with ID starting with 4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355 not found: ID does not exist" containerID="4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.973407 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355"} err="failed to get container status \"4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355\": rpc error: code = NotFound desc = could not find container \"4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355\": container with ID starting with 4a06d037da8a85662cb2da7f397f1dd2e6f562ed79108c0d52a3582f3e418355 not found: ID does not exist" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.973427 4758 scope.go:117] "RemoveContainer" containerID="06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82" Jan 22 17:01:04 crc kubenswrapper[4758]: E0122 17:01:04.973770 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82\": container with ID starting with 
06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82 not found: ID does not exist" containerID="06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.973802 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82"} err="failed to get container status \"06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82\": rpc error: code = NotFound desc = could not find container \"06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82\": container with ID starting with 06e509f7cf069c01b0500819a040af79f9773275716bf0202c15d7b6947afc82 not found: ID does not exist" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.973822 4758 scope.go:117] "RemoveContainer" containerID="e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d" Jan 22 17:01:04 crc kubenswrapper[4758]: E0122 17:01:04.974111 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d\": container with ID starting with e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d not found: ID does not exist" containerID="e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d" Jan 22 17:01:04 crc kubenswrapper[4758]: I0122 17:01:04.974140 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d"} err="failed to get container status \"e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d\": rpc error: code = NotFound desc = could not find container \"e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d\": container with ID starting with e027242baa7996826b2d40859f38e220d228f208131fd3d5914dc45f003f1c3d not found: ID does not exist" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.034206 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-zftxl"] Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.043910 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-zftxl"] Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.247140 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.292082 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmcqc"] Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.292346 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gmcqc" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerName="registry-server" containerID="cri-o://1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5" gracePeriod=2 Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.378733 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-config-data\") pod \"5d061133-6e47-4b25-951f-01e66858742e\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.379020 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n6kk\" (UniqueName: \"kubernetes.io/projected/5d061133-6e47-4b25-951f-01e66858742e-kube-api-access-8n6kk\") pod \"5d061133-6e47-4b25-951f-01e66858742e\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.379164 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-combined-ca-bundle\") pod \"5d061133-6e47-4b25-951f-01e66858742e\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.379230 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-fernet-keys\") pod \"5d061133-6e47-4b25-951f-01e66858742e\" (UID: \"5d061133-6e47-4b25-951f-01e66858742e\") " Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.387400 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5d061133-6e47-4b25-951f-01e66858742e" (UID: "5d061133-6e47-4b25-951f-01e66858742e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.399240 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d061133-6e47-4b25-951f-01e66858742e-kube-api-access-8n6kk" (OuterVolumeSpecName: "kube-api-access-8n6kk") pod "5d061133-6e47-4b25-951f-01e66858742e" (UID: "5d061133-6e47-4b25-951f-01e66858742e"). InnerVolumeSpecName "kube-api-access-8n6kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.421089 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d061133-6e47-4b25-951f-01e66858742e" (UID: "5d061133-6e47-4b25-951f-01e66858742e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.449024 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-config-data" (OuterVolumeSpecName: "config-data") pod "5d061133-6e47-4b25-951f-01e66858742e" (UID: "5d061133-6e47-4b25-951f-01e66858742e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.482344 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.482387 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.482396 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d061133-6e47-4b25-951f-01e66858742e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.482405 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n6kk\" (UniqueName: \"kubernetes.io/projected/5d061133-6e47-4b25-951f-01e66858742e-kube-api-access-8n6kk\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.737558 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.787179 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-catalog-content\") pod \"b945d1ce-7e02-4280-9197-d91a149dba2d\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.787490 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9m9b\" (UniqueName: \"kubernetes.io/projected/b945d1ce-7e02-4280-9197-d91a149dba2d-kube-api-access-w9m9b\") pod \"b945d1ce-7e02-4280-9197-d91a149dba2d\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.787532 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-utilities\") pod \"b945d1ce-7e02-4280-9197-d91a149dba2d\" (UID: \"b945d1ce-7e02-4280-9197-d91a149dba2d\") " Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.789079 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-utilities" (OuterVolumeSpecName: "utilities") pod "b945d1ce-7e02-4280-9197-d91a149dba2d" (UID: "b945d1ce-7e02-4280-9197-d91a149dba2d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.803791 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b945d1ce-7e02-4280-9197-d91a149dba2d-kube-api-access-w9m9b" (OuterVolumeSpecName: "kube-api-access-w9m9b") pod "b945d1ce-7e02-4280-9197-d91a149dba2d" (UID: "b945d1ce-7e02-4280-9197-d91a149dba2d"). InnerVolumeSpecName "kube-api-access-w9m9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.812434 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b945d1ce-7e02-4280-9197-d91a149dba2d" (UID: "b945d1ce-7e02-4280-9197-d91a149dba2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.825411 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b6f4b9a-54d9-440f-853b-b1e3a7b6069b" path="/var/lib/kubelet/pods/0b6f4b9a-54d9-440f-853b-b1e3a7b6069b/volumes" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.826077 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466f9d08-4979-410b-88dc-106c7fcef5a7" path="/var/lib/kubelet/pods/466f9d08-4979-410b-88dc-106c7fcef5a7/volumes" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.863925 4758 generic.go:334] "Generic (PLEG): container finished" podID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerID="1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5" exitCode=0 Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.864003 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmcqc" event={"ID":"b945d1ce-7e02-4280-9197-d91a149dba2d","Type":"ContainerDied","Data":"1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5"} Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.864036 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmcqc" event={"ID":"b945d1ce-7e02-4280-9197-d91a149dba2d","Type":"ContainerDied","Data":"63d34f35b674fa3123ff76af7295b0d6221e17d20c92d0451dfeaaa9386255bb"} Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.864049 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmcqc" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.864063 4758 scope.go:117] "RemoveContainer" containerID="1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.866272 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485021-kphfm" event={"ID":"5d061133-6e47-4b25-951f-01e66858742e","Type":"ContainerDied","Data":"aa12dc7cca41d192b85cff41b708153a61a6edc077857fe548abbc549763d2cf"} Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.866299 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa12dc7cca41d192b85cff41b708153a61a6edc077857fe548abbc549763d2cf" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.866429 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485021-kphfm" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.891488 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9m9b\" (UniqueName: \"kubernetes.io/projected/b945d1ce-7e02-4280-9197-d91a149dba2d-kube-api-access-w9m9b\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.891531 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.891543 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b945d1ce-7e02-4280-9197-d91a149dba2d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.895246 4758 scope.go:117] "RemoveContainer" containerID="f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.905214 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmcqc"] Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.922148 4758 scope.go:117] "RemoveContainer" containerID="b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.930207 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmcqc"] Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.943695 4758 scope.go:117] "RemoveContainer" containerID="1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5" Jan 22 17:01:06 crc kubenswrapper[4758]: E0122 17:01:06.944119 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5\": container with ID starting with 1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5 not found: ID does not exist" containerID="1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.944172 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5"} err="failed to get container status \"1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5\": rpc error: code = NotFound desc = could not find container \"1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5\": container with ID starting with 1fd26ab0cb280eb0c3b115a0ba9d78b344ff1ba0ea6551c5f4260bc39e6b9be5 not found: ID does not exist" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.944203 4758 scope.go:117] "RemoveContainer" containerID="f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582" Jan 22 17:01:06 crc kubenswrapper[4758]: E0122 17:01:06.944734 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582\": container with ID starting with f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582 not found: ID does not exist" containerID="f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.944781 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582"} err="failed to get container status \"f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582\": rpc error: code = NotFound desc = could not find container \"f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582\": container with ID starting with f68599dcf7117dbb936e55e23cd3ee33ec6a177523f1d27982031fc8f3bd3582 not found: ID does not exist" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.944800 4758 scope.go:117] "RemoveContainer" containerID="b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c" Jan 22 17:01:06 crc kubenswrapper[4758]: E0122 17:01:06.945135 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c\": container with ID starting with b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c not found: ID does not exist" containerID="b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c" Jan 22 17:01:06 crc kubenswrapper[4758]: I0122 17:01:06.945176 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c"} err="failed to get container status \"b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c\": rpc error: code = NotFound desc = could not find container \"b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c\": container with ID starting with b53cba428a00ac3663a89bb2cc51e28ae292269cc64c972f999ae4884728ed9c not found: ID does not exist" Jan 22 17:01:07 crc kubenswrapper[4758]: I0122 17:01:07.046032 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-fdqxw"] Jan 22 17:01:07 crc kubenswrapper[4758]: I0122 17:01:07.057653 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-fdqxw"] Jan 22 17:01:08 crc kubenswrapper[4758]: I0122 17:01:08.819831 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" path="/var/lib/kubelet/pods/b945d1ce-7e02-4280-9197-d91a149dba2d/volumes" Jan 22 17:01:08 crc kubenswrapper[4758]: I0122 17:01:08.820886 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c34cee78-07e7-4762-98ed-56f4f0ffc257" path="/var/lib/kubelet/pods/c34cee78-07e7-4762-98ed-56f4f0ffc257/volumes" Jan 22 17:01:09 crc kubenswrapper[4758]: I0122 17:01:09.809397 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:01:09 crc kubenswrapper[4758]: E0122 17:01:09.810037 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:01:21 crc kubenswrapper[4758]: I0122 17:01:21.055041 4758 generic.go:334] "Generic (PLEG): container finished" podID="7b0250c2-eb08-4c81-9d0b-788f1746df63" containerID="d720edadaaa32e6ffb5c4a67f78fb79deddae333c1ccc0a81e6188add965f934" exitCode=0 Jan 22 17:01:21 crc kubenswrapper[4758]: I0122 
17:01:21.055119 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" event={"ID":"7b0250c2-eb08-4c81-9d0b-788f1746df63","Type":"ContainerDied","Data":"d720edadaaa32e6ffb5c4a67f78fb79deddae333c1ccc0a81e6188add965f934"} Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.571517 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.769797 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-bootstrap-combined-ca-bundle\") pod \"7b0250c2-eb08-4c81-9d0b-788f1746df63\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.770429 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-ssh-key-openstack-edpm-ipam\") pod \"7b0250c2-eb08-4c81-9d0b-788f1746df63\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.770617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-inventory\") pod \"7b0250c2-eb08-4c81-9d0b-788f1746df63\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.770933 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s24wc\" (UniqueName: \"kubernetes.io/projected/7b0250c2-eb08-4c81-9d0b-788f1746df63-kube-api-access-s24wc\") pod \"7b0250c2-eb08-4c81-9d0b-788f1746df63\" (UID: \"7b0250c2-eb08-4c81-9d0b-788f1746df63\") " Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.776940 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7b0250c2-eb08-4c81-9d0b-788f1746df63" (UID: "7b0250c2-eb08-4c81-9d0b-788f1746df63"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.787501 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0250c2-eb08-4c81-9d0b-788f1746df63-kube-api-access-s24wc" (OuterVolumeSpecName: "kube-api-access-s24wc") pod "7b0250c2-eb08-4c81-9d0b-788f1746df63" (UID: "7b0250c2-eb08-4c81-9d0b-788f1746df63"). InnerVolumeSpecName "kube-api-access-s24wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.806801 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-inventory" (OuterVolumeSpecName: "inventory") pod "7b0250c2-eb08-4c81-9d0b-788f1746df63" (UID: "7b0250c2-eb08-4c81-9d0b-788f1746df63"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.813228 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b0250c2-eb08-4c81-9d0b-788f1746df63" (UID: "7b0250c2-eb08-4c81-9d0b-788f1746df63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.874988 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s24wc\" (UniqueName: \"kubernetes.io/projected/7b0250c2-eb08-4c81-9d0b-788f1746df63-kube-api-access-s24wc\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.875022 4758 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.875032 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:22 crc kubenswrapper[4758]: I0122 17:01:22.875045 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b0250c2-eb08-4c81-9d0b-788f1746df63-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.083773 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" event={"ID":"7b0250c2-eb08-4c81-9d0b-788f1746df63","Type":"ContainerDied","Data":"86275fdb7db6f9cd93cb4012999cfb2faaa828970f223a22f07b4bcacb8abe9f"} Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.083819 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.083821 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86275fdb7db6f9cd93cb4012999cfb2faaa828970f223a22f07b4bcacb8abe9f" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.173123 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56"] Jan 22 17:01:23 crc kubenswrapper[4758]: E0122 17:01:23.173810 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerName="registry-server" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.173839 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerName="registry-server" Jan 22 17:01:23 crc kubenswrapper[4758]: E0122 17:01:23.173860 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerName="extract-utilities" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.173873 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerName="extract-utilities" Jan 22 17:01:23 crc kubenswrapper[4758]: E0122 17:01:23.173892 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerName="extract-utilities" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.173905 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerName="extract-utilities" Jan 22 17:01:23 crc kubenswrapper[4758]: E0122 17:01:23.173958 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerName="extract-content" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.173995 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerName="extract-content" Jan 22 17:01:23 crc kubenswrapper[4758]: E0122 17:01:23.174023 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d061133-6e47-4b25-951f-01e66858742e" containerName="keystone-cron" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.174035 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d061133-6e47-4b25-951f-01e66858742e" containerName="keystone-cron" Jan 22 17:01:23 crc kubenswrapper[4758]: E0122 17:01:23.174060 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0250c2-eb08-4c81-9d0b-788f1746df63" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.174072 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0250c2-eb08-4c81-9d0b-788f1746df63" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 17:01:23 crc kubenswrapper[4758]: E0122 17:01:23.174094 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerName="extract-content" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.174104 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerName="extract-content" Jan 22 17:01:23 crc kubenswrapper[4758]: E0122 17:01:23.174120 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerName="registry-server" Jan 22 
17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.174131 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerName="registry-server" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.174445 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0250c2-eb08-4c81-9d0b-788f1746df63" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.174471 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d061133-6e47-4b25-951f-01e66858742e" containerName="keystone-cron" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.174486 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="466f9d08-4979-410b-88dc-106c7fcef5a7" containerName="registry-server" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.174522 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b945d1ce-7e02-4280-9197-d91a149dba2d" containerName="registry-server" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.175542 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.179196 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.179280 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.179379 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.179402 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.179677 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wst56\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.179867 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbnkq\" (UniqueName: \"kubernetes.io/projected/d877ce08-9a59-401c-ab3f-fc2c6905507f-kube-api-access-jbnkq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wst56\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.179931 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wst56\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.209926 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56"] Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.281670 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wst56\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.281830 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbnkq\" (UniqueName: \"kubernetes.io/projected/d877ce08-9a59-401c-ab3f-fc2c6905507f-kube-api-access-jbnkq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wst56\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.281869 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wst56\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.285368 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wst56\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.285405 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wst56\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.301484 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbnkq\" (UniqueName: \"kubernetes.io/projected/d877ce08-9a59-401c-ab3f-fc2c6905507f-kube-api-access-jbnkq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wst56\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:23 crc kubenswrapper[4758]: I0122 17:01:23.501449 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:01:24 crc kubenswrapper[4758]: I0122 17:01:24.118654 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56"] Jan 22 17:01:24 crc kubenswrapper[4758]: I0122 17:01:24.808790 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:01:24 crc kubenswrapper[4758]: E0122 17:01:24.809365 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:01:25 crc kubenswrapper[4758]: I0122 17:01:25.103228 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" event={"ID":"d877ce08-9a59-401c-ab3f-fc2c6905507f","Type":"ContainerStarted","Data":"2e5ed8d3139626b73ac103e4e7b0e80fa550d607c0789ae8e05cc9bec3337f04"} Jan 22 17:01:25 crc kubenswrapper[4758]: I0122 17:01:25.103274 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" event={"ID":"d877ce08-9a59-401c-ab3f-fc2c6905507f","Type":"ContainerStarted","Data":"4d411519519db900047d4055aad895f7d3911f63edf13a354735b0b67c1b8b34"} Jan 22 17:01:25 crc kubenswrapper[4758]: I0122 17:01:25.120851 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" podStartSLOduration=1.564739361 podStartE2EDuration="2.120819616s" podCreationTimestamp="2026-01-22 17:01:23 +0000 UTC" firstStartedPulling="2026-01-22 17:01:24.119684857 +0000 UTC m=+1905.603024142" lastFinishedPulling="2026-01-22 17:01:24.675765102 +0000 UTC m=+1906.159104397" observedRunningTime="2026-01-22 17:01:25.114970805 +0000 UTC m=+1906.598310090" watchObservedRunningTime="2026-01-22 17:01:25.120819616 +0000 UTC m=+1906.604158901" Jan 22 17:01:36 crc kubenswrapper[4758]: I0122 17:01:36.868689 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:01:36 crc kubenswrapper[4758]: E0122 17:01:36.869668 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:01:50 crc kubenswrapper[4758]: I0122 17:01:50.871277 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:01:50 crc kubenswrapper[4758]: E0122 17:01:50.871965 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:02:00 crc kubenswrapper[4758]: I0122 17:02:00.326893 4758 scope.go:117] "RemoveContainer" containerID="599e8eeda8a41982195764e5e8bb2304e85da4d83f034cebe3ba0df1e5d9284a" Jan 22 17:02:00 crc kubenswrapper[4758]: I0122 17:02:00.381623 4758 scope.go:117] "RemoveContainer" containerID="2ccb59c8ad7c58f793fffb7731cd998a424c3cf38586390be317e0f235d90577" Jan 22 17:02:04 crc kubenswrapper[4758]: I0122 17:02:04.808935 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:02:04 crc kubenswrapper[4758]: E0122 17:02:04.823280 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:02:06 crc kubenswrapper[4758]: I0122 17:02:06.054697 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2l5dg"] Jan 22 17:02:06 crc kubenswrapper[4758]: I0122 17:02:06.067484 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2l5dg"] Jan 22 17:02:06 crc kubenswrapper[4758]: I0122 17:02:06.078306 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-c52rv"] Jan 22 17:02:06 crc kubenswrapper[4758]: I0122 17:02:06.088535 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-c52rv"] Jan 22 17:02:06 crc kubenswrapper[4758]: I0122 17:02:06.819309 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0bebb3-f086-4c81-8210-5ff9fed77ea4" path="/var/lib/kubelet/pods/ad0bebb3-f086-4c81-8210-5ff9fed77ea4/volumes" Jan 22 17:02:06 crc kubenswrapper[4758]: I0122 17:02:06.819998 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c276b685-1d06-4272-9eeb-7b759a8bffff" path="/var/lib/kubelet/pods/c276b685-1d06-4272-9eeb-7b759a8bffff/volumes" Jan 22 17:02:10 crc kubenswrapper[4758]: I0122 17:02:10.040575 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lv7h6"] Jan 22 17:02:10 crc kubenswrapper[4758]: I0122 17:02:10.049809 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lv7h6"] Jan 22 17:02:10 crc kubenswrapper[4758]: I0122 17:02:10.820681 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cc69af0-0ef0-4399-9084-e81419b65acd" path="/var/lib/kubelet/pods/1cc69af0-0ef0-4399-9084-e81419b65acd/volumes" Jan 22 17:02:19 crc kubenswrapper[4758]: I0122 17:02:19.809333 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:02:19 crc kubenswrapper[4758]: E0122 17:02:19.810229 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:02:21 crc 
kubenswrapper[4758]: I0122 17:02:21.045961 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dmssm"] Jan 22 17:02:21 crc kubenswrapper[4758]: I0122 17:02:21.069166 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dmssm"] Jan 22 17:02:22 crc kubenswrapper[4758]: I0122 17:02:22.822922 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5061fa-23f9-42ce-9682-a3fd99d419d7" path="/var/lib/kubelet/pods/7a5061fa-23f9-42ce-9682-a3fd99d419d7/volumes" Jan 22 17:02:34 crc kubenswrapper[4758]: I0122 17:02:34.040172 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-529mh"] Jan 22 17:02:34 crc kubenswrapper[4758]: I0122 17:02:34.055706 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9h9hb"] Jan 22 17:02:34 crc kubenswrapper[4758]: I0122 17:02:34.069640 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-529mh"] Jan 22 17:02:34 crc kubenswrapper[4758]: I0122 17:02:34.079348 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9h9hb"] Jan 22 17:02:34 crc kubenswrapper[4758]: I0122 17:02:34.808611 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:02:34 crc kubenswrapper[4758]: E0122 17:02:34.809582 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:02:34 crc kubenswrapper[4758]: I0122 17:02:34.820912 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1666997-8287-4065-bcaf-409713fc6782" path="/var/lib/kubelet/pods/b1666997-8287-4065-bcaf-409713fc6782/volumes" Jan 22 17:02:34 crc kubenswrapper[4758]: I0122 17:02:34.822546 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8fe0f21-8912-4d6c-ba4f-6600456784e1" path="/var/lib/kubelet/pods/f8fe0f21-8912-4d6c-ba4f-6600456784e1/volumes" Jan 22 17:02:49 crc kubenswrapper[4758]: I0122 17:02:49.809374 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:02:50 crc kubenswrapper[4758]: I0122 17:02:50.139542 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"5eead23d0d27ee914bca46bed2730995861ad0d4a38f25fb65f69db7d742ebbc"} Jan 22 17:03:00 crc kubenswrapper[4758]: I0122 17:03:00.494163 4758 scope.go:117] "RemoveContainer" containerID="9b2b3ca26420af022c92fa8fa71bffec91d0a63c273807336b9b11c84bcdab6e" Jan 22 17:03:00 crc kubenswrapper[4758]: I0122 17:03:00.556549 4758 scope.go:117] "RemoveContainer" containerID="3a7eb876a027926425012f48e1cd423431ed1fa33024a0073914b0d281905ffd" Jan 22 17:03:00 crc kubenswrapper[4758]: I0122 17:03:00.587318 4758 scope.go:117] "RemoveContainer" containerID="e372811a729ff0df8fbd6e21e7f66d2104eef17385e80d9766761eea836f31d5" Jan 22 17:03:00 crc kubenswrapper[4758]: I0122 17:03:00.662460 4758 scope.go:117] "RemoveContainer" 
containerID="69ee03246e17adce8ca09b0c408259f38eddae14f39bc9e644a8110b0a4bfc78" Jan 22 17:03:00 crc kubenswrapper[4758]: I0122 17:03:00.698855 4758 scope.go:117] "RemoveContainer" containerID="d3b437dad77713b4711bcd032a920a37b7499f02df7fdea656e732ca1a489d0f" Jan 22 17:03:00 crc kubenswrapper[4758]: I0122 17:03:00.754496 4758 scope.go:117] "RemoveContainer" containerID="3d90a62b483d010a7a8dc323d0a9383e4c40248ba21a44fdc6e779c4e5730570" Jan 22 17:03:12 crc kubenswrapper[4758]: I0122 17:03:12.045936 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dpgv7"] Jan 22 17:03:12 crc kubenswrapper[4758]: I0122 17:03:12.053470 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dpgv7"] Jan 22 17:03:12 crc kubenswrapper[4758]: I0122 17:03:12.818890 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b045db5d-f4ac-430d-a697-aeb1a8353fa3" path="/var/lib/kubelet/pods/b045db5d-f4ac-430d-a697-aeb1a8353fa3/volumes" Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.096190 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-lgj69"] Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.111606 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-zx7m7"] Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.122629 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-bc4e-account-create-update-mmj4j"] Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.131376 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-489c-account-create-update-262gf"] Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.139861 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e851-account-create-update-mlg8s"] Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.148033 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-lgj69"] Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.156874 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-bc4e-account-create-update-mmj4j"] Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.165551 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-489c-account-create-update-262gf"] Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.173036 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-zx7m7"] Jan 22 17:03:13 crc kubenswrapper[4758]: I0122 17:03:13.180319 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e851-account-create-update-mlg8s"] Jan 22 17:03:14 crc kubenswrapper[4758]: I0122 17:03:14.820732 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e75be79-a61a-4e9b-92de-fc51822da088" path="/var/lib/kubelet/pods/2e75be79-a61a-4e9b-92de-fc51822da088/volumes" Jan 22 17:03:14 crc kubenswrapper[4758]: I0122 17:03:14.821806 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581d442b-f2db-42dc-bec7-f3b0d32456fb" path="/var/lib/kubelet/pods/581d442b-f2db-42dc-bec7-f3b0d32456fb/volumes" Jan 22 17:03:14 crc kubenswrapper[4758]: I0122 17:03:14.822556 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f427bb1-80ef-4430-aad1-b2ff4b1f4370" path="/var/lib/kubelet/pods/5f427bb1-80ef-4430-aad1-b2ff4b1f4370/volumes" Jan 22 17:03:14 crc kubenswrapper[4758]: I0122 17:03:14.823177 4758 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4" path="/var/lib/kubelet/pods/a9d9b3df-8ebe-49a4-9a23-0aa7dfc15ea4/volumes" Jan 22 17:03:14 crc kubenswrapper[4758]: I0122 17:03:14.824307 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1c7aa3-019f-4178-8a2b-dbb9a69fba64" path="/var/lib/kubelet/pods/ef1c7aa3-019f-4178-8a2b-dbb9a69fba64/volumes" Jan 22 17:03:50 crc kubenswrapper[4758]: I0122 17:03:50.051820 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgsmp"] Jan 22 17:03:50 crc kubenswrapper[4758]: I0122 17:03:50.067985 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgsmp"] Jan 22 17:03:50 crc kubenswrapper[4758]: I0122 17:03:50.826956 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc06c7d9-b286-48cd-a359-6c18d1cc0e80" path="/var/lib/kubelet/pods/fc06c7d9-b286-48cd-a359-6c18d1cc0e80/volumes" Jan 22 17:03:55 crc kubenswrapper[4758]: I0122 17:03:55.128105 4758 generic.go:334] "Generic (PLEG): container finished" podID="d877ce08-9a59-401c-ab3f-fc2c6905507f" containerID="2e5ed8d3139626b73ac103e4e7b0e80fa550d607c0789ae8e05cc9bec3337f04" exitCode=0 Jan 22 17:03:55 crc kubenswrapper[4758]: I0122 17:03:55.128181 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" event={"ID":"d877ce08-9a59-401c-ab3f-fc2c6905507f","Type":"ContainerDied","Data":"2e5ed8d3139626b73ac103e4e7b0e80fa550d607c0789ae8e05cc9bec3337f04"} Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.562765 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.650341 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-ssh-key-openstack-edpm-ipam\") pod \"d877ce08-9a59-401c-ab3f-fc2c6905507f\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.650632 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-inventory\") pod \"d877ce08-9a59-401c-ab3f-fc2c6905507f\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.650839 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbnkq\" (UniqueName: \"kubernetes.io/projected/d877ce08-9a59-401c-ab3f-fc2c6905507f-kube-api-access-jbnkq\") pod \"d877ce08-9a59-401c-ab3f-fc2c6905507f\" (UID: \"d877ce08-9a59-401c-ab3f-fc2c6905507f\") " Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.659168 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d877ce08-9a59-401c-ab3f-fc2c6905507f-kube-api-access-jbnkq" (OuterVolumeSpecName: "kube-api-access-jbnkq") pod "d877ce08-9a59-401c-ab3f-fc2c6905507f" (UID: "d877ce08-9a59-401c-ab3f-fc2c6905507f"). InnerVolumeSpecName "kube-api-access-jbnkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.686352 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d877ce08-9a59-401c-ab3f-fc2c6905507f" (UID: "d877ce08-9a59-401c-ab3f-fc2c6905507f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.698832 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-inventory" (OuterVolumeSpecName: "inventory") pod "d877ce08-9a59-401c-ab3f-fc2c6905507f" (UID: "d877ce08-9a59-401c-ab3f-fc2c6905507f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.755148 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbnkq\" (UniqueName: \"kubernetes.io/projected/d877ce08-9a59-401c-ab3f-fc2c6905507f-kube-api-access-jbnkq\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.755197 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:56 crc kubenswrapper[4758]: I0122 17:03:56.755214 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d877ce08-9a59-401c-ab3f-fc2c6905507f-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.152381 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" event={"ID":"d877ce08-9a59-401c-ab3f-fc2c6905507f","Type":"ContainerDied","Data":"4d411519519db900047d4055aad895f7d3911f63edf13a354735b0b67c1b8b34"} Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.152439 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d411519519db900047d4055aad895f7d3911f63edf13a354735b0b67c1b8b34" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.152482 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wst56" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.258026 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg"] Jan 22 17:03:57 crc kubenswrapper[4758]: E0122 17:03:57.258638 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d877ce08-9a59-401c-ab3f-fc2c6905507f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.258673 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d877ce08-9a59-401c-ab3f-fc2c6905507f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.259059 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d877ce08-9a59-401c-ab3f-fc2c6905507f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.260067 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.262912 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.263209 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.263414 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.265641 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.277234 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg"] Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.366548 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlgt\" (UniqueName: \"kubernetes.io/projected/7247ce98-99d8-4a62-87bc-6fb7696602c4-kube-api-access-rrlgt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.366665 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.366712 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.469206 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.469981 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlgt\" (UniqueName: \"kubernetes.io/projected/7247ce98-99d8-4a62-87bc-6fb7696602c4-kube-api-access-rrlgt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.470056 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.477862 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.482510 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.494635 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlgt\" (UniqueName: \"kubernetes.io/projected/7247ce98-99d8-4a62-87bc-6fb7696602c4-kube-api-access-rrlgt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:57 crc kubenswrapper[4758]: I0122 17:03:57.615675 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:03:58 crc kubenswrapper[4758]: I0122 17:03:58.179926 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg"] Jan 22 17:03:58 crc kubenswrapper[4758]: W0122 17:03:58.185749 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7247ce98_99d8_4a62_87bc_6fb7696602c4.slice/crio-382f47958253ecab083bcb2df10f8080fdb6ca1cc45f0ca4b5da4ce907eb5e83 WatchSource:0}: Error finding container 382f47958253ecab083bcb2df10f8080fdb6ca1cc45f0ca4b5da4ce907eb5e83: Status 404 returned error can't find the container with id 382f47958253ecab083bcb2df10f8080fdb6ca1cc45f0ca4b5da4ce907eb5e83 Jan 22 17:03:58 crc kubenswrapper[4758]: I0122 17:03:58.191503 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:03:59 crc kubenswrapper[4758]: I0122 17:03:59.179785 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" event={"ID":"7247ce98-99d8-4a62-87bc-6fb7696602c4","Type":"ContainerStarted","Data":"8d71375bc30ee451756c3836d4df4afc14c97be76a4b833d51ab18c3539d28e5"} Jan 22 17:03:59 crc kubenswrapper[4758]: I0122 17:03:59.180634 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" event={"ID":"7247ce98-99d8-4a62-87bc-6fb7696602c4","Type":"ContainerStarted","Data":"382f47958253ecab083bcb2df10f8080fdb6ca1cc45f0ca4b5da4ce907eb5e83"} Jan 22 17:03:59 crc kubenswrapper[4758]: I0122 17:03:59.200669 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" podStartSLOduration=1.705580522 podStartE2EDuration="2.200625391s" podCreationTimestamp="2026-01-22 17:03:57 +0000 UTC" firstStartedPulling="2026-01-22 17:03:58.191255008 +0000 UTC m=+2059.674594293" lastFinishedPulling="2026-01-22 17:03:58.686299827 +0000 UTC m=+2060.169639162" observedRunningTime="2026-01-22 17:03:59.197058134 +0000 UTC m=+2060.680397439" watchObservedRunningTime="2026-01-22 17:03:59.200625391 +0000 UTC m=+2060.683964686" Jan 22 17:04:00 crc kubenswrapper[4758]: I0122 17:04:00.953804 4758 scope.go:117] "RemoveContainer" containerID="0da8b121f1a90f41679d3a4d85f1f3d708c60e7ba2a50175f60392dbc18bed65" Jan 22 17:04:00 crc kubenswrapper[4758]: I0122 17:04:00.991877 4758 scope.go:117] "RemoveContainer" containerID="a7aeadf3f379101c9b8098bfc406fac71974c449092423e455debdf5153545c0" Jan 22 17:04:01 crc kubenswrapper[4758]: I0122 17:04:01.073233 4758 scope.go:117] "RemoveContainer" containerID="fe10408f917bc5a4639ba726c854eae9d6a9338197201764120a5b6ad8a4776c" Jan 22 17:04:01 crc kubenswrapper[4758]: I0122 17:04:01.109713 4758 scope.go:117] "RemoveContainer" containerID="ab96ac48962808683a539b7e299acb866974855dda2c573b47c85ff3f69f5a4b" Jan 22 17:04:01 crc kubenswrapper[4758]: I0122 17:04:01.151777 4758 scope.go:117] "RemoveContainer" containerID="17318a3dc2c6cb2ae2f943646a145e9108db8d6c009d308b0604b2731ed85f47" Jan 22 17:04:01 crc kubenswrapper[4758]: I0122 17:04:01.204473 4758 scope.go:117] "RemoveContainer" containerID="21601da3f1fa3b099a62055dd594476ed77fb3ef4a75505adb0aaba258d9abde" Jan 22 17:04:01 crc kubenswrapper[4758]: I0122 17:04:01.266343 4758 scope.go:117] "RemoveContainer" 
containerID="48ec60e0582253735c044eaf382f2a2d3de1738a8c388fbd002d1303b6cff8ee" Jan 22 17:04:13 crc kubenswrapper[4758]: I0122 17:04:13.068889 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-tzfkb"] Jan 22 17:04:13 crc kubenswrapper[4758]: I0122 17:04:13.088487 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-tzfkb"] Jan 22 17:04:14 crc kubenswrapper[4758]: I0122 17:04:14.831138 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18850dee-b495-42e5-87ee-915b6c822255" path="/var/lib/kubelet/pods/18850dee-b495-42e5-87ee-915b6c822255/volumes" Jan 22 17:04:26 crc kubenswrapper[4758]: I0122 17:04:26.042973 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kzc5v"] Jan 22 17:04:26 crc kubenswrapper[4758]: I0122 17:04:26.058148 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kzc5v"] Jan 22 17:04:26 crc kubenswrapper[4758]: I0122 17:04:26.829681 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c17792-1219-46ca-9587-380fbaced23b" path="/var/lib/kubelet/pods/a1c17792-1219-46ca-9587-380fbaced23b/volumes" Jan 22 17:05:01 crc kubenswrapper[4758]: I0122 17:05:01.473447 4758 scope.go:117] "RemoveContainer" containerID="a0cbc6b1d72c487c50e3e6c601ea461a183173eebd3edecaf941ac8870947bbe" Jan 22 17:05:01 crc kubenswrapper[4758]: I0122 17:05:01.523444 4758 scope.go:117] "RemoveContainer" containerID="ea0f7187d9eceffdb826c1735026e3192b78e7d0a69aaa42cbed685c89cb0cd6" Jan 22 17:05:02 crc kubenswrapper[4758]: I0122 17:05:02.065765 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vlp59"] Jan 22 17:05:02 crc kubenswrapper[4758]: I0122 17:05:02.081162 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vlp59"] Jan 22 17:05:02 crc kubenswrapper[4758]: I0122 17:05:02.828173 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c22116-ce0a-4806-bbf7-e514519abff0" path="/var/lib/kubelet/pods/e1c22116-ce0a-4806-bbf7-e514519abff0/volumes" Jan 22 17:05:13 crc kubenswrapper[4758]: I0122 17:05:13.837411 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:05:13 crc kubenswrapper[4758]: I0122 17:05:13.838148 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:05:43 crc kubenswrapper[4758]: I0122 17:05:43.861098 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:05:43 crc kubenswrapper[4758]: I0122 17:05:43.861907 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:05:45 crc kubenswrapper[4758]: I0122 17:05:45.467232 4758 generic.go:334] "Generic (PLEG): container finished" podID="7247ce98-99d8-4a62-87bc-6fb7696602c4" containerID="8d71375bc30ee451756c3836d4df4afc14c97be76a4b833d51ab18c3539d28e5" exitCode=0 Jan 22 17:05:45 crc kubenswrapper[4758]: I0122 17:05:45.467310 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" event={"ID":"7247ce98-99d8-4a62-87bc-6fb7696602c4","Type":"ContainerDied","Data":"8d71375bc30ee451756c3836d4df4afc14c97be76a4b833d51ab18c3539d28e5"} Jan 22 17:05:46 crc kubenswrapper[4758]: I0122 17:05:46.909450 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.030287 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-ssh-key-openstack-edpm-ipam\") pod \"7247ce98-99d8-4a62-87bc-6fb7696602c4\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.030482 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-inventory\") pod \"7247ce98-99d8-4a62-87bc-6fb7696602c4\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.030513 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrlgt\" (UniqueName: \"kubernetes.io/projected/7247ce98-99d8-4a62-87bc-6fb7696602c4-kube-api-access-rrlgt\") pod \"7247ce98-99d8-4a62-87bc-6fb7696602c4\" (UID: \"7247ce98-99d8-4a62-87bc-6fb7696602c4\") " Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.039022 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7247ce98-99d8-4a62-87bc-6fb7696602c4-kube-api-access-rrlgt" (OuterVolumeSpecName: "kube-api-access-rrlgt") pod "7247ce98-99d8-4a62-87bc-6fb7696602c4" (UID: "7247ce98-99d8-4a62-87bc-6fb7696602c4"). InnerVolumeSpecName "kube-api-access-rrlgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.061912 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7247ce98-99d8-4a62-87bc-6fb7696602c4" (UID: "7247ce98-99d8-4a62-87bc-6fb7696602c4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.069235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-inventory" (OuterVolumeSpecName: "inventory") pod "7247ce98-99d8-4a62-87bc-6fb7696602c4" (UID: "7247ce98-99d8-4a62-87bc-6fb7696602c4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.133952 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.134014 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7247ce98-99d8-4a62-87bc-6fb7696602c4-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.134028 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrlgt\" (UniqueName: \"kubernetes.io/projected/7247ce98-99d8-4a62-87bc-6fb7696602c4-kube-api-access-rrlgt\") on node \"crc\" DevicePath \"\"" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.489530 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" event={"ID":"7247ce98-99d8-4a62-87bc-6fb7696602c4","Type":"ContainerDied","Data":"382f47958253ecab083bcb2df10f8080fdb6ca1cc45f0ca4b5da4ce907eb5e83"} Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.489596 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.489598 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="382f47958253ecab083bcb2df10f8080fdb6ca1cc45f0ca4b5da4ce907eb5e83" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.590502 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c"] Jan 22 17:05:47 crc kubenswrapper[4758]: E0122 17:05:47.590945 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7247ce98-99d8-4a62-87bc-6fb7696602c4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.590970 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7247ce98-99d8-4a62-87bc-6fb7696602c4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.591192 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7247ce98-99d8-4a62-87bc-6fb7696602c4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.591902 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.596016 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.596020 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.596334 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.598964 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.604339 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c"] Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.645031 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdmgp\" (UniqueName: \"kubernetes.io/projected/de60144d-7668-4bcf-8421-dc4b0ceedf26-kube-api-access-fdmgp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lm46c\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.645165 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lm46c\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.645187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lm46c\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.747440 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdmgp\" (UniqueName: \"kubernetes.io/projected/de60144d-7668-4bcf-8421-dc4b0ceedf26-kube-api-access-fdmgp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lm46c\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.747613 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lm46c\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.747637 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lm46c\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.751436 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lm46c\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.752272 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lm46c\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.765021 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdmgp\" (UniqueName: \"kubernetes.io/projected/de60144d-7668-4bcf-8421-dc4b0ceedf26-kube-api-access-fdmgp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lm46c\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:47 crc kubenswrapper[4758]: I0122 17:05:47.913500 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:48 crc kubenswrapper[4758]: I0122 17:05:48.485572 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c"] Jan 22 17:05:48 crc kubenswrapper[4758]: I0122 17:05:48.501814 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" event={"ID":"de60144d-7668-4bcf-8421-dc4b0ceedf26","Type":"ContainerStarted","Data":"0bbd2b9e077c667346aff92f4c99aff973378b2e1c1146888c63be0cba6716b4"} Jan 22 17:05:49 crc kubenswrapper[4758]: I0122 17:05:49.511826 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" event={"ID":"de60144d-7668-4bcf-8421-dc4b0ceedf26","Type":"ContainerStarted","Data":"242bdeef28806855af8c1e81ec4fddb63c086716d5eed5bd410a8b12acee5c54"} Jan 22 17:05:49 crc kubenswrapper[4758]: I0122 17:05:49.526854 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" podStartSLOduration=2.038937922 podStartE2EDuration="2.526818956s" podCreationTimestamp="2026-01-22 17:05:47 +0000 UTC" firstStartedPulling="2026-01-22 17:05:48.486194652 +0000 UTC m=+2169.969533937" lastFinishedPulling="2026-01-22 17:05:48.974075686 +0000 UTC m=+2170.457414971" observedRunningTime="2026-01-22 17:05:49.525595094 +0000 UTC m=+2171.008934389" watchObservedRunningTime="2026-01-22 17:05:49.526818956 +0000 UTC m=+2171.010158251" Jan 22 17:05:54 crc kubenswrapper[4758]: I0122 17:05:54.558449 4758 generic.go:334] "Generic (PLEG): container finished" podID="de60144d-7668-4bcf-8421-dc4b0ceedf26" 
containerID="242bdeef28806855af8c1e81ec4fddb63c086716d5eed5bd410a8b12acee5c54" exitCode=0 Jan 22 17:05:54 crc kubenswrapper[4758]: I0122 17:05:54.558539 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" event={"ID":"de60144d-7668-4bcf-8421-dc4b0ceedf26","Type":"ContainerDied","Data":"242bdeef28806855af8c1e81ec4fddb63c086716d5eed5bd410a8b12acee5c54"} Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.000292 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.143863 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-inventory\") pod \"de60144d-7668-4bcf-8421-dc4b0ceedf26\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.143921 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-ssh-key-openstack-edpm-ipam\") pod \"de60144d-7668-4bcf-8421-dc4b0ceedf26\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.144051 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdmgp\" (UniqueName: \"kubernetes.io/projected/de60144d-7668-4bcf-8421-dc4b0ceedf26-kube-api-access-fdmgp\") pod \"de60144d-7668-4bcf-8421-dc4b0ceedf26\" (UID: \"de60144d-7668-4bcf-8421-dc4b0ceedf26\") " Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.150212 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de60144d-7668-4bcf-8421-dc4b0ceedf26-kube-api-access-fdmgp" (OuterVolumeSpecName: "kube-api-access-fdmgp") pod "de60144d-7668-4bcf-8421-dc4b0ceedf26" (UID: "de60144d-7668-4bcf-8421-dc4b0ceedf26"). InnerVolumeSpecName "kube-api-access-fdmgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.202504 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-inventory" (OuterVolumeSpecName: "inventory") pod "de60144d-7668-4bcf-8421-dc4b0ceedf26" (UID: "de60144d-7668-4bcf-8421-dc4b0ceedf26"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.203668 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de60144d-7668-4bcf-8421-dc4b0ceedf26" (UID: "de60144d-7668-4bcf-8421-dc4b0ceedf26"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.245926 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.245960 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de60144d-7668-4bcf-8421-dc4b0ceedf26-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.245974 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdmgp\" (UniqueName: \"kubernetes.io/projected/de60144d-7668-4bcf-8421-dc4b0ceedf26-kube-api-access-fdmgp\") on node \"crc\" DevicePath \"\"" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.579254 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" event={"ID":"de60144d-7668-4bcf-8421-dc4b0ceedf26","Type":"ContainerDied","Data":"0bbd2b9e077c667346aff92f4c99aff973378b2e1c1146888c63be0cba6716b4"} Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.579545 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbd2b9e077c667346aff92f4c99aff973378b2e1c1146888c63be0cba6716b4" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.579283 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lm46c" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.666455 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7"] Jan 22 17:05:56 crc kubenswrapper[4758]: E0122 17:05:56.667482 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de60144d-7668-4bcf-8421-dc4b0ceedf26" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.667514 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="de60144d-7668-4bcf-8421-dc4b0ceedf26" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.667818 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="de60144d-7668-4bcf-8421-dc4b0ceedf26" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.668811 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.671643 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.671909 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.672024 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.672194 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.682306 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7"] Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.756246 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpdz7\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.756452 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpdz7\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.756509 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvdvh\" (UniqueName: \"kubernetes.io/projected/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-kube-api-access-nvdvh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpdz7\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.858577 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpdz7\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.858635 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvdvh\" (UniqueName: \"kubernetes.io/projected/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-kube-api-access-nvdvh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpdz7\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.858866 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-mpdz7\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.870913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpdz7\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.870961 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpdz7\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.876901 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvdvh\" (UniqueName: \"kubernetes.io/projected/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-kube-api-access-nvdvh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpdz7\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:56 crc kubenswrapper[4758]: I0122 17:05:56.995465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:05:57 crc kubenswrapper[4758]: I0122 17:05:57.598075 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7"] Jan 22 17:05:58 crc kubenswrapper[4758]: I0122 17:05:58.620570 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" event={"ID":"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c","Type":"ContainerStarted","Data":"400bcb2751e94869318fb93f1e6c0c567a1c520eacced44c296317596b569e1e"} Jan 22 17:05:58 crc kubenswrapper[4758]: I0122 17:05:58.620991 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" event={"ID":"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c","Type":"ContainerStarted","Data":"7fdd96b37e8a826ed6f037fca8fa51d62efd9aa0c4db8a65e76f333a83b311f7"} Jan 22 17:05:58 crc kubenswrapper[4758]: I0122 17:05:58.643033 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" podStartSLOduration=2.240055691 podStartE2EDuration="2.643014469s" podCreationTimestamp="2026-01-22 17:05:56 +0000 UTC" firstStartedPulling="2026-01-22 17:05:57.592414433 +0000 UTC m=+2179.075753748" lastFinishedPulling="2026-01-22 17:05:57.995373201 +0000 UTC m=+2179.478712526" observedRunningTime="2026-01-22 17:05:58.635461093 +0000 UTC m=+2180.118800388" watchObservedRunningTime="2026-01-22 17:05:58.643014469 +0000 UTC m=+2180.126353754" Jan 22 17:06:01 crc kubenswrapper[4758]: I0122 17:06:01.635271 4758 scope.go:117] "RemoveContainer" containerID="64e1a857d67e593db9601cf41360703e3f11f22770e474322a231e40b2dbbd2d" Jan 22 17:06:13 crc kubenswrapper[4758]: I0122 17:06:13.837913 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:06:13 crc kubenswrapper[4758]: I0122 17:06:13.838735 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:06:13 crc kubenswrapper[4758]: I0122 17:06:13.838843 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:06:13 crc kubenswrapper[4758]: I0122 17:06:13.840041 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5eead23d0d27ee914bca46bed2730995861ad0d4a38f25fb65f69db7d742ebbc"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:06:13 crc kubenswrapper[4758]: I0122 17:06:13.840196 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://5eead23d0d27ee914bca46bed2730995861ad0d4a38f25fb65f69db7d742ebbc" gracePeriod=600 Jan 22 17:06:14 crc kubenswrapper[4758]: I0122 17:06:14.785967 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="5eead23d0d27ee914bca46bed2730995861ad0d4a38f25fb65f69db7d742ebbc" exitCode=0 Jan 22 17:06:14 crc kubenswrapper[4758]: I0122 17:06:14.786054 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"5eead23d0d27ee914bca46bed2730995861ad0d4a38f25fb65f69db7d742ebbc"} Jan 22 17:06:14 crc kubenswrapper[4758]: I0122 17:06:14.786484 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0"} Jan 22 17:06:14 crc kubenswrapper[4758]: I0122 17:06:14.786504 4758 scope.go:117] "RemoveContainer" containerID="9fbb4e4b642afb97b44eb564377795a5aede8a06f9d628acf1dc7fd06d2240ab" Jan 22 17:06:45 crc kubenswrapper[4758]: I0122 17:06:45.281947 4758 generic.go:334] "Generic (PLEG): container finished" podID="1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c" containerID="400bcb2751e94869318fb93f1e6c0c567a1c520eacced44c296317596b569e1e" exitCode=0 Jan 22 17:06:45 crc kubenswrapper[4758]: I0122 17:06:45.282483 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" event={"ID":"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c","Type":"ContainerDied","Data":"400bcb2751e94869318fb93f1e6c0c567a1c520eacced44c296317596b569e1e"} Jan 22 17:06:46 crc kubenswrapper[4758]: I0122 17:06:46.867085 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.014643 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvdvh\" (UniqueName: \"kubernetes.io/projected/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-kube-api-access-nvdvh\") pod \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.014687 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-inventory\") pod \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.014734 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-ssh-key-openstack-edpm-ipam\") pod \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\" (UID: \"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c\") " Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.025012 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-kube-api-access-nvdvh" (OuterVolumeSpecName: "kube-api-access-nvdvh") pod "1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c" (UID: "1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c"). InnerVolumeSpecName "kube-api-access-nvdvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.048461 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-inventory" (OuterVolumeSpecName: "inventory") pod "1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c" (UID: "1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.063255 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c" (UID: "1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.116803 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvdvh\" (UniqueName: \"kubernetes.io/projected/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-kube-api-access-nvdvh\") on node \"crc\" DevicePath \"\"" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.116829 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.116838 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.311318 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" event={"ID":"1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c","Type":"ContainerDied","Data":"7fdd96b37e8a826ed6f037fca8fa51d62efd9aa0c4db8a65e76f333a83b311f7"} Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.311393 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fdd96b37e8a826ed6f037fca8fa51d62efd9aa0c4db8a65e76f333a83b311f7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.311415 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpdz7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.407702 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7"] Jan 22 17:06:47 crc kubenswrapper[4758]: E0122 17:06:47.408304 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.408334 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.408639 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.409832 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.418626 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.418976 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.419188 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.419247 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.424179 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7"] Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.523448 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkt2x\" (UniqueName: \"kubernetes.io/projected/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-kube-api-access-kkt2x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.523561 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.523811 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.630556 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.630746 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkt2x\" (UniqueName: \"kubernetes.io/projected/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-kube-api-access-kkt2x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.630948 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.639216 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.641538 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.649591 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkt2x\" (UniqueName: \"kubernetes.io/projected/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-kube-api-access-kkt2x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:47 crc kubenswrapper[4758]: I0122 17:06:47.733774 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:06:48 crc kubenswrapper[4758]: I0122 17:06:48.316559 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7"] Jan 22 17:06:49 crc kubenswrapper[4758]: I0122 17:06:49.367656 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" event={"ID":"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d","Type":"ContainerStarted","Data":"0f3f399626c4272a661373aff070718ca462610a72b0ef1a9579d6af34c07464"} Jan 22 17:06:49 crc kubenswrapper[4758]: I0122 17:06:49.368408 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" event={"ID":"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d","Type":"ContainerStarted","Data":"828dcf540608058f775a35b5edf44e91f593ce6741c2dfd831a1220e878927d7"} Jan 22 17:06:49 crc kubenswrapper[4758]: I0122 17:06:49.392480 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" podStartSLOduration=1.7944062779999999 podStartE2EDuration="2.392441215s" podCreationTimestamp="2026-01-22 17:06:47 +0000 UTC" firstStartedPulling="2026-01-22 17:06:48.321553975 +0000 UTC m=+2229.804893260" lastFinishedPulling="2026-01-22 17:06:48.919588922 +0000 UTC m=+2230.402928197" observedRunningTime="2026-01-22 17:06:49.383887701 +0000 UTC m=+2230.867226986" watchObservedRunningTime="2026-01-22 17:06:49.392441215 +0000 UTC m=+2230.875780510" Jan 22 17:07:49 crc kubenswrapper[4758]: I0122 17:07:49.055532 4758 generic.go:334] "Generic (PLEG): container finished" podID="8ad7e035-e1f4-4274-b9c1-9014a86bfb5d" 
containerID="0f3f399626c4272a661373aff070718ca462610a72b0ef1a9579d6af34c07464" exitCode=0 Jan 22 17:07:49 crc kubenswrapper[4758]: I0122 17:07:49.055641 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" event={"ID":"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d","Type":"ContainerDied","Data":"0f3f399626c4272a661373aff070718ca462610a72b0ef1a9579d6af34c07464"} Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.496131 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.524169 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-ssh-key-openstack-edpm-ipam\") pod \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.524371 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-inventory\") pod \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.524495 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkt2x\" (UniqueName: \"kubernetes.io/projected/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-kube-api-access-kkt2x\") pod \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\" (UID: \"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d\") " Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.531296 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-kube-api-access-kkt2x" (OuterVolumeSpecName: "kube-api-access-kkt2x") pod "8ad7e035-e1f4-4274-b9c1-9014a86bfb5d" (UID: "8ad7e035-e1f4-4274-b9c1-9014a86bfb5d"). InnerVolumeSpecName "kube-api-access-kkt2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.554574 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8ad7e035-e1f4-4274-b9c1-9014a86bfb5d" (UID: "8ad7e035-e1f4-4274-b9c1-9014a86bfb5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.560235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-inventory" (OuterVolumeSpecName: "inventory") pod "8ad7e035-e1f4-4274-b9c1-9014a86bfb5d" (UID: "8ad7e035-e1f4-4274-b9c1-9014a86bfb5d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.630325 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkt2x\" (UniqueName: \"kubernetes.io/projected/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-kube-api-access-kkt2x\") on node \"crc\" DevicePath \"\"" Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.630680 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:07:50 crc kubenswrapper[4758]: I0122 17:07:50.630914 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ad7e035-e1f4-4274-b9c1-9014a86bfb5d-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.076981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" event={"ID":"8ad7e035-e1f4-4274-b9c1-9014a86bfb5d","Type":"ContainerDied","Data":"828dcf540608058f775a35b5edf44e91f593ce6741c2dfd831a1220e878927d7"} Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.077024 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="828dcf540608058f775a35b5edf44e91f593ce6741c2dfd831a1220e878927d7" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.077088 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.188200 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h2nj4"] Jan 22 17:07:51 crc kubenswrapper[4758]: E0122 17:07:51.189102 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad7e035-e1f4-4274-b9c1-9014a86bfb5d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.189124 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad7e035-e1f4-4274-b9c1-9014a86bfb5d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.189406 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad7e035-e1f4-4274-b9c1-9014a86bfb5d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.190352 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.193508 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.193790 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.194022 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.197678 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.201609 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h2nj4"] Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.363167 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h2nj4\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.363252 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmbjr\" (UniqueName: \"kubernetes.io/projected/b4ba22a1-71a4-433b-a32f-c73302d187de-kube-api-access-hmbjr\") pod \"ssh-known-hosts-edpm-deployment-h2nj4\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.363395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h2nj4\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.465730 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmbjr\" (UniqueName: \"kubernetes.io/projected/b4ba22a1-71a4-433b-a32f-c73302d187de-kube-api-access-hmbjr\") pod \"ssh-known-hosts-edpm-deployment-h2nj4\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.465836 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h2nj4\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.465961 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h2nj4\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc 
kubenswrapper[4758]: I0122 17:07:51.471415 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h2nj4\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.475692 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h2nj4\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.482210 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmbjr\" (UniqueName: \"kubernetes.io/projected/b4ba22a1-71a4-433b-a32f-c73302d187de-kube-api-access-hmbjr\") pod \"ssh-known-hosts-edpm-deployment-h2nj4\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:51 crc kubenswrapper[4758]: I0122 17:07:51.508380 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:07:52 crc kubenswrapper[4758]: I0122 17:07:52.113732 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h2nj4"] Jan 22 17:07:53 crc kubenswrapper[4758]: I0122 17:07:53.097664 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" event={"ID":"b4ba22a1-71a4-433b-a32f-c73302d187de","Type":"ContainerStarted","Data":"6f9e46e2f15e10e86365c449ddf7de91a15979ecc20972e0a7adc83cc2094bde"} Jan 22 17:07:53 crc kubenswrapper[4758]: I0122 17:07:53.098255 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" event={"ID":"b4ba22a1-71a4-433b-a32f-c73302d187de","Type":"ContainerStarted","Data":"7c332075fcb235a0643aaace7ad5e0c066435c49aacf6c312424801c1ed99559"} Jan 22 17:07:53 crc kubenswrapper[4758]: I0122 17:07:53.115568 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" podStartSLOduration=1.657070512 podStartE2EDuration="2.115547723s" podCreationTimestamp="2026-01-22 17:07:51 +0000 UTC" firstStartedPulling="2026-01-22 17:07:52.116629786 +0000 UTC m=+2293.599969071" lastFinishedPulling="2026-01-22 17:07:52.575106997 +0000 UTC m=+2294.058446282" observedRunningTime="2026-01-22 17:07:53.11142991 +0000 UTC m=+2294.594769215" watchObservedRunningTime="2026-01-22 17:07:53.115547723 +0000 UTC m=+2294.598887008" Jan 22 17:08:00 crc kubenswrapper[4758]: I0122 17:08:00.183770 4758 generic.go:334] "Generic (PLEG): container finished" podID="b4ba22a1-71a4-433b-a32f-c73302d187de" containerID="6f9e46e2f15e10e86365c449ddf7de91a15979ecc20972e0a7adc83cc2094bde" exitCode=0 Jan 22 17:08:00 crc kubenswrapper[4758]: I0122 17:08:00.183784 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" event={"ID":"b4ba22a1-71a4-433b-a32f-c73302d187de","Type":"ContainerDied","Data":"6f9e46e2f15e10e86365c449ddf7de91a15979ecc20972e0a7adc83cc2094bde"} Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.688393 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.764589 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-inventory-0\") pod \"b4ba22a1-71a4-433b-a32f-c73302d187de\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.764656 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmbjr\" (UniqueName: \"kubernetes.io/projected/b4ba22a1-71a4-433b-a32f-c73302d187de-kube-api-access-hmbjr\") pod \"b4ba22a1-71a4-433b-a32f-c73302d187de\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.764985 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-ssh-key-openstack-edpm-ipam\") pod \"b4ba22a1-71a4-433b-a32f-c73302d187de\" (UID: \"b4ba22a1-71a4-433b-a32f-c73302d187de\") " Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.778030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ba22a1-71a4-433b-a32f-c73302d187de-kube-api-access-hmbjr" (OuterVolumeSpecName: "kube-api-access-hmbjr") pod "b4ba22a1-71a4-433b-a32f-c73302d187de" (UID: "b4ba22a1-71a4-433b-a32f-c73302d187de"). InnerVolumeSpecName "kube-api-access-hmbjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.802978 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b4ba22a1-71a4-433b-a32f-c73302d187de" (UID: "b4ba22a1-71a4-433b-a32f-c73302d187de"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.819943 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b4ba22a1-71a4-433b-a32f-c73302d187de" (UID: "b4ba22a1-71a4-433b-a32f-c73302d187de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.869215 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.869269 4758 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b4ba22a1-71a4-433b-a32f-c73302d187de-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:08:01 crc kubenswrapper[4758]: I0122 17:08:01.869284 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmbjr\" (UniqueName: \"kubernetes.io/projected/b4ba22a1-71a4-433b-a32f-c73302d187de-kube-api-access-hmbjr\") on node \"crc\" DevicePath \"\"" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.207260 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" event={"ID":"b4ba22a1-71a4-433b-a32f-c73302d187de","Type":"ContainerDied","Data":"7c332075fcb235a0643aaace7ad5e0c066435c49aacf6c312424801c1ed99559"} Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.207329 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c332075fcb235a0643aaace7ad5e0c066435c49aacf6c312424801c1ed99559" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.207458 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2nj4" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.298529 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5"] Jan 22 17:08:02 crc kubenswrapper[4758]: E0122 17:08:02.301611 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ba22a1-71a4-433b-a32f-c73302d187de" containerName="ssh-known-hosts-edpm-deployment" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.301863 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ba22a1-71a4-433b-a32f-c73302d187de" containerName="ssh-known-hosts-edpm-deployment" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.303116 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ba22a1-71a4-433b-a32f-c73302d187de" containerName="ssh-known-hosts-edpm-deployment" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.306603 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.310194 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.310418 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.313510 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.314174 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.353099 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5"] Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.411057 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwhzs\" (UniqueName: \"kubernetes.io/projected/84e11d12-3496-4358-9062-7cd076d2de7c-kube-api-access-qwhzs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tm4f5\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.411217 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tm4f5\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.411280 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tm4f5\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.513186 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwhzs\" (UniqueName: \"kubernetes.io/projected/84e11d12-3496-4358-9062-7cd076d2de7c-kube-api-access-qwhzs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tm4f5\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.513318 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tm4f5\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.513373 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-tm4f5\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.517333 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tm4f5\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.518360 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tm4f5\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.533881 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwhzs\" (UniqueName: \"kubernetes.io/projected/84e11d12-3496-4358-9062-7cd076d2de7c-kube-api-access-qwhzs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tm4f5\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:02 crc kubenswrapper[4758]: I0122 17:08:02.638065 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:03 crc kubenswrapper[4758]: I0122 17:08:03.259045 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5"] Jan 22 17:08:04 crc kubenswrapper[4758]: I0122 17:08:04.231245 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" event={"ID":"84e11d12-3496-4358-9062-7cd076d2de7c","Type":"ContainerStarted","Data":"b33f3fee88480912f8b628012eaf610537dccdd73073214674810310a6b25b06"} Jan 22 17:08:04 crc kubenswrapper[4758]: I0122 17:08:04.231723 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" event={"ID":"84e11d12-3496-4358-9062-7cd076d2de7c","Type":"ContainerStarted","Data":"b619ab30c43fce4fd8a806588900766b95e5287235f90f8c26e83d36cee361d8"} Jan 22 17:08:04 crc kubenswrapper[4758]: I0122 17:08:04.249298 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" podStartSLOduration=1.784154316 podStartE2EDuration="2.249266967s" podCreationTimestamp="2026-01-22 17:08:02 +0000 UTC" firstStartedPulling="2026-01-22 17:08:03.258382099 +0000 UTC m=+2304.741721434" lastFinishedPulling="2026-01-22 17:08:03.72349478 +0000 UTC m=+2305.206834085" observedRunningTime="2026-01-22 17:08:04.246600734 +0000 UTC m=+2305.729940019" watchObservedRunningTime="2026-01-22 17:08:04.249266967 +0000 UTC m=+2305.732606252" Jan 22 17:08:14 crc kubenswrapper[4758]: I0122 17:08:14.398201 4758 generic.go:334] "Generic (PLEG): container finished" podID="84e11d12-3496-4358-9062-7cd076d2de7c" containerID="b33f3fee88480912f8b628012eaf610537dccdd73073214674810310a6b25b06" exitCode=0 Jan 22 17:08:14 crc kubenswrapper[4758]: I0122 17:08:14.398830 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" event={"ID":"84e11d12-3496-4358-9062-7cd076d2de7c","Type":"ContainerDied","Data":"b33f3fee88480912f8b628012eaf610537dccdd73073214674810310a6b25b06"} Jan 22 17:08:15 crc kubenswrapper[4758]: I0122 17:08:15.850403 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:15 crc kubenswrapper[4758]: I0122 17:08:15.988278 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-ssh-key-openstack-edpm-ipam\") pod \"84e11d12-3496-4358-9062-7cd076d2de7c\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " Jan 22 17:08:15 crc kubenswrapper[4758]: I0122 17:08:15.988456 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-inventory\") pod \"84e11d12-3496-4358-9062-7cd076d2de7c\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " Jan 22 17:08:15 crc kubenswrapper[4758]: I0122 17:08:15.988498 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwhzs\" (UniqueName: \"kubernetes.io/projected/84e11d12-3496-4358-9062-7cd076d2de7c-kube-api-access-qwhzs\") pod \"84e11d12-3496-4358-9062-7cd076d2de7c\" (UID: \"84e11d12-3496-4358-9062-7cd076d2de7c\") " Jan 22 17:08:15 crc kubenswrapper[4758]: I0122 17:08:15.994943 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e11d12-3496-4358-9062-7cd076d2de7c-kube-api-access-qwhzs" (OuterVolumeSpecName: "kube-api-access-qwhzs") pod "84e11d12-3496-4358-9062-7cd076d2de7c" (UID: "84e11d12-3496-4358-9062-7cd076d2de7c"). InnerVolumeSpecName "kube-api-access-qwhzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.022401 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "84e11d12-3496-4358-9062-7cd076d2de7c" (UID: "84e11d12-3496-4358-9062-7cd076d2de7c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.023453 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-inventory" (OuterVolumeSpecName: "inventory") pod "84e11d12-3496-4358-9062-7cd076d2de7c" (UID: "84e11d12-3496-4358-9062-7cd076d2de7c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.091149 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.091377 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84e11d12-3496-4358-9062-7cd076d2de7c-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.091455 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwhzs\" (UniqueName: \"kubernetes.io/projected/84e11d12-3496-4358-9062-7cd076d2de7c-kube-api-access-qwhzs\") on node \"crc\" DevicePath \"\"" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.485560 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" event={"ID":"84e11d12-3496-4358-9062-7cd076d2de7c","Type":"ContainerDied","Data":"b619ab30c43fce4fd8a806588900766b95e5287235f90f8c26e83d36cee361d8"} Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.485611 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b619ab30c43fce4fd8a806588900766b95e5287235f90f8c26e83d36cee361d8" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.485620 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tm4f5" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.549444 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb"] Jan 22 17:08:16 crc kubenswrapper[4758]: E0122 17:08:16.550621 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e11d12-3496-4358-9062-7cd076d2de7c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.550646 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e11d12-3496-4358-9062-7cd076d2de7c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.551270 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e11d12-3496-4358-9062-7cd076d2de7c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.552548 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.558182 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.559239 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.562001 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.562272 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.568368 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb"] Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.602025 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.602165 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/01e9a9ff-8646-410e-81d5-f8757e1089bc-kube-api-access-d59tg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.602231 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.703946 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.704286 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.704513 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/01e9a9ff-8646-410e-81d5-f8757e1089bc-kube-api-access-d59tg\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.709054 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.719997 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.724601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/01e9a9ff-8646-410e-81d5-f8757e1089bc-kube-api-access-d59tg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:16 crc kubenswrapper[4758]: I0122 17:08:16.884303 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:17 crc kubenswrapper[4758]: I0122 17:08:17.430422 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb"] Jan 22 17:08:17 crc kubenswrapper[4758]: W0122 17:08:17.437573 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01e9a9ff_8646_410e_81d5_f8757e1089bc.slice/crio-c31cdc682d968ff8fb7f6034588a6a6ad3e66768ed6d4801f79ab1c454685419 WatchSource:0}: Error finding container c31cdc682d968ff8fb7f6034588a6a6ad3e66768ed6d4801f79ab1c454685419: Status 404 returned error can't find the container with id c31cdc682d968ff8fb7f6034588a6a6ad3e66768ed6d4801f79ab1c454685419 Jan 22 17:08:17 crc kubenswrapper[4758]: I0122 17:08:17.494877 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" event={"ID":"01e9a9ff-8646-410e-81d5-f8757e1089bc","Type":"ContainerStarted","Data":"c31cdc682d968ff8fb7f6034588a6a6ad3e66768ed6d4801f79ab1c454685419"} Jan 22 17:08:18 crc kubenswrapper[4758]: I0122 17:08:18.505143 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" event={"ID":"01e9a9ff-8646-410e-81d5-f8757e1089bc","Type":"ContainerStarted","Data":"01348a004a6aeab028b6408a755e379c069d215f018c2975f92521e7f933a401"} Jan 22 17:08:28 crc kubenswrapper[4758]: I0122 17:08:28.593963 4758 generic.go:334] "Generic (PLEG): container finished" podID="01e9a9ff-8646-410e-81d5-f8757e1089bc" containerID="01348a004a6aeab028b6408a755e379c069d215f018c2975f92521e7f933a401" exitCode=0 Jan 22 17:08:28 crc kubenswrapper[4758]: I0122 17:08:28.594067 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" 
event={"ID":"01e9a9ff-8646-410e-81d5-f8757e1089bc","Type":"ContainerDied","Data":"01348a004a6aeab028b6408a755e379c069d215f018c2975f92521e7f933a401"} Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.036032 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.089080 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-ssh-key-openstack-edpm-ipam\") pod \"01e9a9ff-8646-410e-81d5-f8757e1089bc\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.089185 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-inventory\") pod \"01e9a9ff-8646-410e-81d5-f8757e1089bc\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.089240 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/01e9a9ff-8646-410e-81d5-f8757e1089bc-kube-api-access-d59tg\") pod \"01e9a9ff-8646-410e-81d5-f8757e1089bc\" (UID: \"01e9a9ff-8646-410e-81d5-f8757e1089bc\") " Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.095425 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e9a9ff-8646-410e-81d5-f8757e1089bc-kube-api-access-d59tg" (OuterVolumeSpecName: "kube-api-access-d59tg") pod "01e9a9ff-8646-410e-81d5-f8757e1089bc" (UID: "01e9a9ff-8646-410e-81d5-f8757e1089bc"). InnerVolumeSpecName "kube-api-access-d59tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.116998 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01e9a9ff-8646-410e-81d5-f8757e1089bc" (UID: "01e9a9ff-8646-410e-81d5-f8757e1089bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.119853 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-inventory" (OuterVolumeSpecName: "inventory") pod "01e9a9ff-8646-410e-81d5-f8757e1089bc" (UID: "01e9a9ff-8646-410e-81d5-f8757e1089bc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.193049 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.193358 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e9a9ff-8646-410e-81d5-f8757e1089bc-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.193456 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/01e9a9ff-8646-410e-81d5-f8757e1089bc-kube-api-access-d59tg\") on node \"crc\" DevicePath \"\"" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.615779 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" event={"ID":"01e9a9ff-8646-410e-81d5-f8757e1089bc","Type":"ContainerDied","Data":"c31cdc682d968ff8fb7f6034588a6a6ad3e66768ed6d4801f79ab1c454685419"} Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.615825 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31cdc682d968ff8fb7f6034588a6a6ad3e66768ed6d4801f79ab1c454685419" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.615896 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.721370 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5"] Jan 22 17:08:30 crc kubenswrapper[4758]: E0122 17:08:30.721895 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e9a9ff-8646-410e-81d5-f8757e1089bc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.721919 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e9a9ff-8646-410e-81d5-f8757e1089bc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.722146 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e9a9ff-8646-410e-81d5-f8757e1089bc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.722815 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.724848 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.725324 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.725454 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.726014 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.726121 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.726312 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.730450 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.730665 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.740440 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5"] Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.809309 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.809398 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.809530 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.809586 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.809672 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.809707 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.809818 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.809896 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.810000 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.810085 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.810207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: 
\"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.810258 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.810290 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtr4f\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-kube-api-access-rtr4f\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.810354 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.911961 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.912072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.912129 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.912203 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.912245 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.913024 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.913064 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.913098 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.913128 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.913165 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.913206 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.913253 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ovn-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.913294 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.913321 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtr4f\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-kube-api-access-rtr4f\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.919888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.920016 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.920373 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.921188 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.921918 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.922487 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.922923 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.923211 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.923229 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.924589 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.924992 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.925426 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.925527 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: 
\"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:30 crc kubenswrapper[4758]: I0122 17:08:30.933315 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtr4f\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-kube-api-access-rtr4f\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-smgm5\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:31 crc kubenswrapper[4758]: I0122 17:08:31.039453 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:08:31 crc kubenswrapper[4758]: I0122 17:08:31.608207 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5"] Jan 22 17:08:31 crc kubenswrapper[4758]: I0122 17:08:31.633352 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" event={"ID":"328e6c99-b23b-4d6d-b816-79d6af92932f","Type":"ContainerStarted","Data":"3a8f3310390df2b1fe41c214fed83e72786d5536d1c50ff6feae5f9525172ac6"} Jan 22 17:08:32 crc kubenswrapper[4758]: I0122 17:08:32.655528 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" event={"ID":"328e6c99-b23b-4d6d-b816-79d6af92932f","Type":"ContainerStarted","Data":"115746066de46060a1eb3433c45ef395ad91724b86cb4941698818cb90ad3f34"} Jan 22 17:08:32 crc kubenswrapper[4758]: I0122 17:08:32.679571 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" podStartSLOduration=2.111825499 podStartE2EDuration="2.679544469s" podCreationTimestamp="2026-01-22 17:08:30 +0000 UTC" firstStartedPulling="2026-01-22 17:08:31.614408687 +0000 UTC m=+2333.097747972" lastFinishedPulling="2026-01-22 17:08:32.182127657 +0000 UTC m=+2333.665466942" observedRunningTime="2026-01-22 17:08:32.679089137 +0000 UTC m=+2334.162428432" watchObservedRunningTime="2026-01-22 17:08:32.679544469 +0000 UTC m=+2334.162883764" Jan 22 17:08:43 crc kubenswrapper[4758]: I0122 17:08:43.837205 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:08:43 crc kubenswrapper[4758]: I0122 17:08:43.837799 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.079625 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fx2kr"] Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.082251 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.107522 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fx2kr"] Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.183590 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4z6g\" (UniqueName: \"kubernetes.io/projected/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-kube-api-access-g4z6g\") pod \"community-operators-fx2kr\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.183705 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-utilities\") pod \"community-operators-fx2kr\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.184056 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-catalog-content\") pod \"community-operators-fx2kr\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.272067 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qr56r"] Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.276859 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.287055 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-utilities\") pod \"certified-operators-qr56r\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.287162 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-catalog-content\") pod \"certified-operators-qr56r\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.287195 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4z6g\" (UniqueName: \"kubernetes.io/projected/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-kube-api-access-g4z6g\") pod \"community-operators-fx2kr\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.287359 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-utilities\") pod \"community-operators-fx2kr\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.287455 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxn5\" (UniqueName: \"kubernetes.io/projected/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-kube-api-access-8dxn5\") pod \"certified-operators-qr56r\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.287823 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-catalog-content\") pod \"community-operators-fx2kr\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.287905 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-utilities\") pod \"community-operators-fx2kr\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.288386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-catalog-content\") pod \"community-operators-fx2kr\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.300481 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qr56r"] Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.329097 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g4z6g\" (UniqueName: \"kubernetes.io/projected/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-kube-api-access-g4z6g\") pod \"community-operators-fx2kr\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.389102 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-utilities\") pod \"certified-operators-qr56r\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.389182 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-catalog-content\") pod \"certified-operators-qr56r\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.389246 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dxn5\" (UniqueName: \"kubernetes.io/projected/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-kube-api-access-8dxn5\") pod \"certified-operators-qr56r\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.389569 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-utilities\") pod \"certified-operators-qr56r\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.389694 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-catalog-content\") pod \"certified-operators-qr56r\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.408971 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dxn5\" (UniqueName: \"kubernetes.io/projected/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-kube-api-access-8dxn5\") pod \"certified-operators-qr56r\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.430111 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:08:52 crc kubenswrapper[4758]: I0122 17:08:52.610096 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:08:53 crc kubenswrapper[4758]: I0122 17:08:53.042132 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fx2kr"] Jan 22 17:08:53 crc kubenswrapper[4758]: I0122 17:08:53.051806 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qr56r"] Jan 22 17:08:53 crc kubenswrapper[4758]: W0122 17:08:53.068847 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6adfceee_6a7d_49d0_9d6d_360ae6e1f64b.slice/crio-1988e36702381058069edf526f20750a432ebb6278e6559c0cf25969df23d2d9 WatchSource:0}: Error finding container 1988e36702381058069edf526f20750a432ebb6278e6559c0cf25969df23d2d9: Status 404 returned error can't find the container with id 1988e36702381058069edf526f20750a432ebb6278e6559c0cf25969df23d2d9 Jan 22 17:08:53 crc kubenswrapper[4758]: I0122 17:08:53.877760 4758 generic.go:334] "Generic (PLEG): container finished" podID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerID="94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83" exitCode=0 Jan 22 17:08:53 crc kubenswrapper[4758]: I0122 17:08:53.877827 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx2kr" event={"ID":"d3d292b8-3ae7-4c24-a25d-665f0aaaa031","Type":"ContainerDied","Data":"94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83"} Jan 22 17:08:53 crc kubenswrapper[4758]: I0122 17:08:53.878132 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx2kr" event={"ID":"d3d292b8-3ae7-4c24-a25d-665f0aaaa031","Type":"ContainerStarted","Data":"a8507fc910a9ad58328c0f9a2c736d6218f1ff4b0461242881678cf1755937cd"} Jan 22 17:08:53 crc kubenswrapper[4758]: I0122 17:08:53.880031 4758 generic.go:334] "Generic (PLEG): container finished" podID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerID="27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3" exitCode=0 Jan 22 17:08:53 crc kubenswrapper[4758]: I0122 17:08:53.880066 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr56r" event={"ID":"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b","Type":"ContainerDied","Data":"27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3"} Jan 22 17:08:53 crc kubenswrapper[4758]: I0122 17:08:53.880089 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr56r" event={"ID":"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b","Type":"ContainerStarted","Data":"1988e36702381058069edf526f20750a432ebb6278e6559c0cf25969df23d2d9"} Jan 22 17:08:54 crc kubenswrapper[4758]: I0122 17:08:54.889154 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx2kr" event={"ID":"d3d292b8-3ae7-4c24-a25d-665f0aaaa031","Type":"ContainerStarted","Data":"93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b"} Jan 22 17:08:55 crc kubenswrapper[4758]: I0122 17:08:55.899617 4758 generic.go:334] "Generic (PLEG): container finished" podID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerID="93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b" exitCode=0 Jan 22 17:08:55 crc kubenswrapper[4758]: I0122 17:08:55.899702 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx2kr" 
event={"ID":"d3d292b8-3ae7-4c24-a25d-665f0aaaa031","Type":"ContainerDied","Data":"93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b"} Jan 22 17:08:58 crc kubenswrapper[4758]: I0122 17:08:58.942778 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx2kr" event={"ID":"d3d292b8-3ae7-4c24-a25d-665f0aaaa031","Type":"ContainerStarted","Data":"acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad"} Jan 22 17:08:58 crc kubenswrapper[4758]: I0122 17:08:58.945210 4758 generic.go:334] "Generic (PLEG): container finished" podID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerID="2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4" exitCode=0 Jan 22 17:08:58 crc kubenswrapper[4758]: I0122 17:08:58.945255 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr56r" event={"ID":"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b","Type":"ContainerDied","Data":"2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4"} Jan 22 17:08:58 crc kubenswrapper[4758]: I0122 17:08:58.947493 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:08:58 crc kubenswrapper[4758]: I0122 17:08:58.967786 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fx2kr" podStartSLOduration=2.942508444 podStartE2EDuration="6.967766788s" podCreationTimestamp="2026-01-22 17:08:52 +0000 UTC" firstStartedPulling="2026-01-22 17:08:53.880604651 +0000 UTC m=+2355.363943936" lastFinishedPulling="2026-01-22 17:08:57.905862955 +0000 UTC m=+2359.389202280" observedRunningTime="2026-01-22 17:08:58.959566266 +0000 UTC m=+2360.442905551" watchObservedRunningTime="2026-01-22 17:08:58.967766788 +0000 UTC m=+2360.451106073" Jan 22 17:08:59 crc kubenswrapper[4758]: I0122 17:08:59.958338 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr56r" event={"ID":"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b","Type":"ContainerStarted","Data":"a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1"} Jan 22 17:08:59 crc kubenswrapper[4758]: I0122 17:08:59.994605 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qr56r" podStartSLOduration=2.46958789 podStartE2EDuration="7.994579786s" podCreationTimestamp="2026-01-22 17:08:52 +0000 UTC" firstStartedPulling="2026-01-22 17:08:53.88166566 +0000 UTC m=+2355.365004935" lastFinishedPulling="2026-01-22 17:08:59.406657546 +0000 UTC m=+2360.889996831" observedRunningTime="2026-01-22 17:08:59.979148895 +0000 UTC m=+2361.462488180" watchObservedRunningTime="2026-01-22 17:08:59.994579786 +0000 UTC m=+2361.477919071" Jan 22 17:09:02 crc kubenswrapper[4758]: I0122 17:09:02.430818 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:09:02 crc kubenswrapper[4758]: I0122 17:09:02.431465 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:09:02 crc kubenswrapper[4758]: I0122 17:09:02.507720 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:09:02 crc kubenswrapper[4758]: I0122 17:09:02.611794 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:09:02 crc kubenswrapper[4758]: I0122 17:09:02.611864 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:09:02 crc kubenswrapper[4758]: I0122 17:09:02.658382 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:09:12 crc kubenswrapper[4758]: I0122 17:09:12.495924 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:09:12 crc kubenswrapper[4758]: I0122 17:09:12.554029 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fx2kr"] Jan 22 17:09:12 crc kubenswrapper[4758]: I0122 17:09:12.655854 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.097776 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fx2kr" podUID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerName="registry-server" containerID="cri-o://acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad" gracePeriod=2 Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.596734 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.675542 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-catalog-content\") pod \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.675585 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-utilities\") pod \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.675661 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4z6g\" (UniqueName: \"kubernetes.io/projected/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-kube-api-access-g4z6g\") pod \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\" (UID: \"d3d292b8-3ae7-4c24-a25d-665f0aaaa031\") " Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.676880 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-utilities" (OuterVolumeSpecName: "utilities") pod "d3d292b8-3ae7-4c24-a25d-665f0aaaa031" (UID: "d3d292b8-3ae7-4c24-a25d-665f0aaaa031"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.682996 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-kube-api-access-g4z6g" (OuterVolumeSpecName: "kube-api-access-g4z6g") pod "d3d292b8-3ae7-4c24-a25d-665f0aaaa031" (UID: "d3d292b8-3ae7-4c24-a25d-665f0aaaa031"). InnerVolumeSpecName "kube-api-access-g4z6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.728902 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3d292b8-3ae7-4c24-a25d-665f0aaaa031" (UID: "d3d292b8-3ae7-4c24-a25d-665f0aaaa031"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.778275 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4z6g\" (UniqueName: \"kubernetes.io/projected/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-kube-api-access-g4z6g\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.778309 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.778319 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d292b8-3ae7-4c24-a25d-665f0aaaa031-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.837942 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:09:13 crc kubenswrapper[4758]: I0122 17:09:13.838016 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.111188 4758 generic.go:334] "Generic (PLEG): container finished" podID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerID="acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad" exitCode=0 Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.111236 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx2kr" event={"ID":"d3d292b8-3ae7-4c24-a25d-665f0aaaa031","Type":"ContainerDied","Data":"acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad"} Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.111256 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fx2kr" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.111272 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx2kr" event={"ID":"d3d292b8-3ae7-4c24-a25d-665f0aaaa031","Type":"ContainerDied","Data":"a8507fc910a9ad58328c0f9a2c736d6218f1ff4b0461242881678cf1755937cd"} Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.111329 4758 scope.go:117] "RemoveContainer" containerID="acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.145758 4758 scope.go:117] "RemoveContainer" containerID="93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.150203 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fx2kr"] Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.159250 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fx2kr"] Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.182717 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qr56r"] Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.188614 4758 scope.go:117] "RemoveContainer" containerID="94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.249953 4758 scope.go:117] "RemoveContainer" containerID="acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad" Jan 22 17:09:14 crc kubenswrapper[4758]: E0122 17:09:14.251062 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad\": container with ID starting with acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad not found: ID does not exist" containerID="acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.251151 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad"} err="failed to get container status \"acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad\": rpc error: code = NotFound desc = could not find container \"acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad\": container with ID starting with acacbb9141c26affa96e10b12fe0226e1af45ad3bf921d5ba5ba711c00bb17ad not found: ID does not exist" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.251212 4758 scope.go:117] "RemoveContainer" containerID="93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b" Jan 22 17:09:14 crc kubenswrapper[4758]: E0122 17:09:14.251895 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b\": container with ID starting with 93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b not found: ID does not exist" containerID="93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.251944 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b"} err="failed to get container status \"93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b\": rpc error: code = NotFound desc = could not find container \"93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b\": container with ID starting with 93b14f279c9794f0c270a315f4276aa05176a6cc1899528898d26b3e11e02a5b not found: ID does not exist" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.251976 4758 scope.go:117] "RemoveContainer" containerID="94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83" Jan 22 17:09:14 crc kubenswrapper[4758]: E0122 17:09:14.252275 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83\": container with ID starting with 94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83 not found: ID does not exist" containerID="94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.252316 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83"} err="failed to get container status \"94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83\": rpc error: code = NotFound desc = could not find container \"94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83\": container with ID starting with 94411a5457fa3a45409307c4488d7646368cb35d99c2bab0ccc58c00466f6b83 not found: ID does not exist" Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.562818 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgz9b"] Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.563167 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sgz9b" podUID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerName="registry-server" containerID="cri-o://dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e" gracePeriod=2 Jan 22 17:09:14 crc kubenswrapper[4758]: I0122 17:09:14.824055 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" path="/var/lib/kubelet/pods/d3d292b8-3ae7-4c24-a25d-665f0aaaa031/volumes" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.057435 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.106627 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-utilities\") pod \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.106842 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-catalog-content\") pod \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.107125 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn857\" (UniqueName: \"kubernetes.io/projected/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-kube-api-access-fn857\") pod \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\" (UID: \"4b4ed303-532f-42b2-a60e-b8d95bd6dd08\") " Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.107814 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-utilities" (OuterVolumeSpecName: "utilities") pod "4b4ed303-532f-42b2-a60e-b8d95bd6dd08" (UID: "4b4ed303-532f-42b2-a60e-b8d95bd6dd08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.117242 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-kube-api-access-fn857" (OuterVolumeSpecName: "kube-api-access-fn857") pod "4b4ed303-532f-42b2-a60e-b8d95bd6dd08" (UID: "4b4ed303-532f-42b2-a60e-b8d95bd6dd08"). InnerVolumeSpecName "kube-api-access-fn857". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.124805 4758 generic.go:334] "Generic (PLEG): container finished" podID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerID="dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e" exitCode=0 Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.124863 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sgz9b" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.124862 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgz9b" event={"ID":"4b4ed303-532f-42b2-a60e-b8d95bd6dd08","Type":"ContainerDied","Data":"dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e"} Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.125219 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgz9b" event={"ID":"4b4ed303-532f-42b2-a60e-b8d95bd6dd08","Type":"ContainerDied","Data":"44462aa60c7af177fac69d8871f1d255e89235b51a8ff08b9ea57c95030a6b58"} Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.125330 4758 scope.go:117] "RemoveContainer" containerID="dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.164701 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b4ed303-532f-42b2-a60e-b8d95bd6dd08" (UID: "4b4ed303-532f-42b2-a60e-b8d95bd6dd08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.182576 4758 scope.go:117] "RemoveContainer" containerID="6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.209130 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn857\" (UniqueName: \"kubernetes.io/projected/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-kube-api-access-fn857\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.209179 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.209189 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4ed303-532f-42b2-a60e-b8d95bd6dd08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.215401 4758 scope.go:117] "RemoveContainer" containerID="5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.237258 4758 scope.go:117] "RemoveContainer" containerID="dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e" Jan 22 17:09:15 crc kubenswrapper[4758]: E0122 17:09:15.237863 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e\": container with ID starting with dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e not found: ID does not exist" containerID="dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.237969 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e"} err="failed to get container status \"dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e\": rpc error: code = NotFound desc = could not find container 
\"dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e\": container with ID starting with dd88ab9a1129810a77b2b1b975f9f9d37a8b12b55bfce106ce53248efc49376e not found: ID does not exist" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.238046 4758 scope.go:117] "RemoveContainer" containerID="6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498" Jan 22 17:09:15 crc kubenswrapper[4758]: E0122 17:09:15.238361 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498\": container with ID starting with 6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498 not found: ID does not exist" containerID="6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.238455 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498"} err="failed to get container status \"6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498\": rpc error: code = NotFound desc = could not find container \"6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498\": container with ID starting with 6095390e277642b8befc8dcb9ef2c2c3af9be4283942949f1e8a9ad6ddc85498 not found: ID does not exist" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.238522 4758 scope.go:117] "RemoveContainer" containerID="5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa" Jan 22 17:09:15 crc kubenswrapper[4758]: E0122 17:09:15.238827 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa\": container with ID starting with 5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa not found: ID does not exist" containerID="5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.238950 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa"} err="failed to get container status \"5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa\": rpc error: code = NotFound desc = could not find container \"5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa\": container with ID starting with 5a6fbd91880f208852e2deea8f5151e587d9e0b82cdc9c1f274029910345f4aa not found: ID does not exist" Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.469912 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgz9b"] Jan 22 17:09:15 crc kubenswrapper[4758]: I0122 17:09:15.477408 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sgz9b"] Jan 22 17:09:16 crc kubenswrapper[4758]: I0122 17:09:16.142345 4758 generic.go:334] "Generic (PLEG): container finished" podID="328e6c99-b23b-4d6d-b816-79d6af92932f" containerID="115746066de46060a1eb3433c45ef395ad91724b86cb4941698818cb90ad3f34" exitCode=0 Jan 22 17:09:16 crc kubenswrapper[4758]: I0122 17:09:16.142545 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" 
event={"ID":"328e6c99-b23b-4d6d-b816-79d6af92932f","Type":"ContainerDied","Data":"115746066de46060a1eb3433c45ef395ad91724b86cb4941698818cb90ad3f34"} Jan 22 17:09:16 crc kubenswrapper[4758]: I0122 17:09:16.822500 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" path="/var/lib/kubelet/pods/4b4ed303-532f-42b2-a60e-b8d95bd6dd08/volumes" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.579353 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.654966 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtr4f\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-kube-api-access-rtr4f\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655038 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-neutron-metadata-combined-ca-bundle\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655069 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-inventory\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655096 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-bootstrap-combined-ca-bundle\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655140 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-repo-setup-combined-ca-bundle\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655196 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-libvirt-combined-ca-bundle\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655234 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655320 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655350 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ovn-combined-ca-bundle\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655374 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-telemetry-combined-ca-bundle\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655397 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655467 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-nova-combined-ca-bundle\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655533 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ssh-key-openstack-edpm-ipam\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.655566 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"328e6c99-b23b-4d6d-b816-79d6af92932f\" (UID: \"328e6c99-b23b-4d6d-b816-79d6af92932f\") " Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.663661 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-kube-api-access-rtr4f" (OuterVolumeSpecName: "kube-api-access-rtr4f") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "kube-api-access-rtr4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.664875 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.665016 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.665197 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.665297 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.665404 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.666020 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.666204 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.667356 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.667640 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.669958 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.673120 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.696718 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-inventory" (OuterVolumeSpecName: "inventory") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.699218 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "328e6c99-b23b-4d6d-b816-79d6af92932f" (UID: "328e6c99-b23b-4d6d-b816-79d6af92932f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758360 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758410 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758422 4758 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758432 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758441 4758 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758450 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758458 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758466 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtr4f\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-kube-api-access-rtr4f\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758475 4758 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758486 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758494 4758 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758505 4758 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758512 4758 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328e6c99-b23b-4d6d-b816-79d6af92932f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:17 crc kubenswrapper[4758]: I0122 17:09:17.758523 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/328e6c99-b23b-4d6d-b816-79d6af92932f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.164142 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" event={"ID":"328e6c99-b23b-4d6d-b816-79d6af92932f","Type":"ContainerDied","Data":"3a8f3310390df2b1fe41c214fed83e72786d5536d1c50ff6feae5f9525172ac6"} Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.164409 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a8f3310390df2b1fe41c214fed83e72786d5536d1c50ff6feae5f9525172ac6" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.164233 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-smgm5" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.259703 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2"] Jan 22 17:09:18 crc kubenswrapper[4758]: E0122 17:09:18.260277 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerName="extract-utilities" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260301 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerName="extract-utilities" Jan 22 17:09:18 crc kubenswrapper[4758]: E0122 17:09:18.260323 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerName="extract-content" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260331 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerName="extract-content" Jan 22 17:09:18 crc kubenswrapper[4758]: E0122 17:09:18.260349 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerName="extract-content" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260357 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerName="extract-content" Jan 22 17:09:18 crc kubenswrapper[4758]: E0122 17:09:18.260374 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328e6c99-b23b-4d6d-b816-79d6af92932f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260383 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="328e6c99-b23b-4d6d-b816-79d6af92932f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 17:09:18 crc kubenswrapper[4758]: E0122 17:09:18.260401 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" 
containerName="extract-utilities" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260409 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerName="extract-utilities" Jan 22 17:09:18 crc kubenswrapper[4758]: E0122 17:09:18.260434 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerName="registry-server" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260441 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerName="registry-server" Jan 22 17:09:18 crc kubenswrapper[4758]: E0122 17:09:18.260456 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerName="registry-server" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260463 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerName="registry-server" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260700 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d292b8-3ae7-4c24-a25d-665f0aaaa031" containerName="registry-server" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260720 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b4ed303-532f-42b2-a60e-b8d95bd6dd08" containerName="registry-server" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.260731 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="328e6c99-b23b-4d6d-b816-79d6af92932f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.262026 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.264886 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.265338 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.265473 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.265579 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.265833 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.277872 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2"] Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.369965 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqfkf\" (UniqueName: \"kubernetes.io/projected/3b6debcd-ee7f-4791-90eb-36e13e82f542-kube-api-access-pqfkf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.370491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.370575 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.370656 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.370754 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.472455 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-pqfkf\" (UniqueName: \"kubernetes.io/projected/3b6debcd-ee7f-4791-90eb-36e13e82f542-kube-api-access-pqfkf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.472559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.472579 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.472612 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.472640 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.473899 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.476775 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.478396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.481828 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.490504 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqfkf\" (UniqueName: \"kubernetes.io/projected/3b6debcd-ee7f-4791-90eb-36e13e82f542-kube-api-access-pqfkf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t69z2\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:18 crc kubenswrapper[4758]: I0122 17:09:18.584780 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:09:19 crc kubenswrapper[4758]: I0122 17:09:19.216725 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2"] Jan 22 17:09:20 crc kubenswrapper[4758]: I0122 17:09:20.186353 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" event={"ID":"3b6debcd-ee7f-4791-90eb-36e13e82f542","Type":"ContainerStarted","Data":"4d5ab5e0b54a9d5daf470fef60543ff565f99cf24e3254295188314fc83edfdb"} Jan 22 17:09:21 crc kubenswrapper[4758]: I0122 17:09:21.198733 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" event={"ID":"3b6debcd-ee7f-4791-90eb-36e13e82f542","Type":"ContainerStarted","Data":"ffff9520cb583b99b846dd00762f7a2c5c521dd35f2f7c28d6d013ba8a6f643f"} Jan 22 17:09:21 crc kubenswrapper[4758]: I0122 17:09:21.217059 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" podStartSLOduration=2.497975433 podStartE2EDuration="3.217040529s" podCreationTimestamp="2026-01-22 17:09:18 +0000 UTC" firstStartedPulling="2026-01-22 17:09:19.216993916 +0000 UTC m=+2380.700333191" lastFinishedPulling="2026-01-22 17:09:19.936059002 +0000 UTC m=+2381.419398287" observedRunningTime="2026-01-22 17:09:21.215819926 +0000 UTC m=+2382.699159211" watchObservedRunningTime="2026-01-22 17:09:21.217040529 +0000 UTC m=+2382.700379814" Jan 22 17:09:43 crc kubenswrapper[4758]: I0122 17:09:43.837401 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:09:43 crc kubenswrapper[4758]: I0122 17:09:43.838072 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:09:43 crc kubenswrapper[4758]: I0122 17:09:43.838122 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:09:43 crc kubenswrapper[4758]: I0122 17:09:43.839021 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:09:43 crc kubenswrapper[4758]: I0122 17:09:43.839096 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" gracePeriod=600 Jan 22 17:09:43 crc kubenswrapper[4758]: E0122 17:09:43.966920 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:09:44 crc kubenswrapper[4758]: I0122 17:09:44.518030 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" exitCode=0 Jan 22 17:09:44 crc kubenswrapper[4758]: I0122 17:09:44.518086 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0"} Jan 22 17:09:44 crc kubenswrapper[4758]: I0122 17:09:44.518140 4758 scope.go:117] "RemoveContainer" containerID="5eead23d0d27ee914bca46bed2730995861ad0d4a38f25fb65f69db7d742ebbc" Jan 22 17:09:44 crc kubenswrapper[4758]: I0122 17:09:44.518728 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:09:44 crc kubenswrapper[4758]: E0122 17:09:44.519034 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:09:58 crc kubenswrapper[4758]: I0122 17:09:58.816986 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:09:58 crc kubenswrapper[4758]: E0122 17:09:58.818072 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:10:09 crc kubenswrapper[4758]: I0122 17:10:09.809397 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:10:09 crc kubenswrapper[4758]: E0122 17:10:09.810417 4758 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:10:24 crc kubenswrapper[4758]: I0122 17:10:24.808461 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:10:24 crc kubenswrapper[4758]: E0122 17:10:24.809358 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:10:35 crc kubenswrapper[4758]: I0122 17:10:35.808450 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:10:35 crc kubenswrapper[4758]: E0122 17:10:35.809205 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:10:38 crc kubenswrapper[4758]: I0122 17:10:38.254210 4758 generic.go:334] "Generic (PLEG): container finished" podID="3b6debcd-ee7f-4791-90eb-36e13e82f542" containerID="ffff9520cb583b99b846dd00762f7a2c5c521dd35f2f7c28d6d013ba8a6f643f" exitCode=0 Jan 22 17:10:38 crc kubenswrapper[4758]: I0122 17:10:38.254570 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" event={"ID":"3b6debcd-ee7f-4791-90eb-36e13e82f542","Type":"ContainerDied","Data":"ffff9520cb583b99b846dd00762f7a2c5c521dd35f2f7c28d6d013ba8a6f643f"} Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.696227 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.828186 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqfkf\" (UniqueName: \"kubernetes.io/projected/3b6debcd-ee7f-4791-90eb-36e13e82f542-kube-api-access-pqfkf\") pod \"3b6debcd-ee7f-4791-90eb-36e13e82f542\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.828299 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovncontroller-config-0\") pod \"3b6debcd-ee7f-4791-90eb-36e13e82f542\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.828337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ssh-key-openstack-edpm-ipam\") pod \"3b6debcd-ee7f-4791-90eb-36e13e82f542\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.828490 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-inventory\") pod \"3b6debcd-ee7f-4791-90eb-36e13e82f542\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.828888 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovn-combined-ca-bundle\") pod \"3b6debcd-ee7f-4791-90eb-36e13e82f542\" (UID: \"3b6debcd-ee7f-4791-90eb-36e13e82f542\") " Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.833917 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3b6debcd-ee7f-4791-90eb-36e13e82f542" (UID: "3b6debcd-ee7f-4791-90eb-36e13e82f542"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.836096 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6debcd-ee7f-4791-90eb-36e13e82f542-kube-api-access-pqfkf" (OuterVolumeSpecName: "kube-api-access-pqfkf") pod "3b6debcd-ee7f-4791-90eb-36e13e82f542" (UID: "3b6debcd-ee7f-4791-90eb-36e13e82f542"). InnerVolumeSpecName "kube-api-access-pqfkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.867106 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3b6debcd-ee7f-4791-90eb-36e13e82f542" (UID: "3b6debcd-ee7f-4791-90eb-36e13e82f542"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.868752 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b6debcd-ee7f-4791-90eb-36e13e82f542" (UID: "3b6debcd-ee7f-4791-90eb-36e13e82f542"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.869930 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-inventory" (OuterVolumeSpecName: "inventory") pod "3b6debcd-ee7f-4791-90eb-36e13e82f542" (UID: "3b6debcd-ee7f-4791-90eb-36e13e82f542"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.932484 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.932527 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.932542 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqfkf\" (UniqueName: \"kubernetes.io/projected/3b6debcd-ee7f-4791-90eb-36e13e82f542-kube-api-access-pqfkf\") on node \"crc\" DevicePath \"\"" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.932555 4758 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3b6debcd-ee7f-4791-90eb-36e13e82f542-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:10:39 crc kubenswrapper[4758]: I0122 17:10:39.932567 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b6debcd-ee7f-4791-90eb-36e13e82f542-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.277806 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" event={"ID":"3b6debcd-ee7f-4791-90eb-36e13e82f542","Type":"ContainerDied","Data":"4d5ab5e0b54a9d5daf470fef60543ff565f99cf24e3254295188314fc83edfdb"} Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.277913 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d5ab5e0b54a9d5daf470fef60543ff565f99cf24e3254295188314fc83edfdb" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.278021 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t69z2" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.398352 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl"] Jan 22 17:10:40 crc kubenswrapper[4758]: E0122 17:10:40.399056 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b6debcd-ee7f-4791-90eb-36e13e82f542" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.399082 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b6debcd-ee7f-4791-90eb-36e13e82f542" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.399455 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b6debcd-ee7f-4791-90eb-36e13e82f542" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.400672 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.409500 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.409799 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.409927 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.410165 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.410176 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.410816 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.426714 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl"] Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.548256 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.548696 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.548769 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.548790 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv6bz\" (UniqueName: \"kubernetes.io/projected/0a76cd73-4259-4fa1-8846-f645ef6603b1-kube-api-access-sv6bz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.548817 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.548880 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.651544 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.651625 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.651677 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv6bz\" (UniqueName: \"kubernetes.io/projected/0a76cd73-4259-4fa1-8846-f645ef6603b1-kube-api-access-sv6bz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.651703 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.651755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.651854 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.656277 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.657013 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.657814 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.658385 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.660055 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 
crc kubenswrapper[4758]: I0122 17:10:40.677258 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv6bz\" (UniqueName: \"kubernetes.io/projected/0a76cd73-4259-4fa1-8846-f645ef6603b1-kube-api-access-sv6bz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:40 crc kubenswrapper[4758]: I0122 17:10:40.728693 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:10:41 crc kubenswrapper[4758]: I0122 17:10:41.293764 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl"] Jan 22 17:10:42 crc kubenswrapper[4758]: I0122 17:10:42.318404 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" event={"ID":"0a76cd73-4259-4fa1-8846-f645ef6603b1","Type":"ContainerStarted","Data":"690de9718c3edc93c9afacf89e8c242ee0d7f62f04b635f68556a78d2d5ffab5"} Jan 22 17:10:43 crc kubenswrapper[4758]: I0122 17:10:43.331825 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" event={"ID":"0a76cd73-4259-4fa1-8846-f645ef6603b1","Type":"ContainerStarted","Data":"dfc3900bb28facc133d7de0e2842105a17149921ae71782bf59e40419b4bafcf"} Jan 22 17:10:43 crc kubenswrapper[4758]: I0122 17:10:43.359259 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" podStartSLOduration=2.5878464340000003 podStartE2EDuration="3.359218262s" podCreationTimestamp="2026-01-22 17:10:40 +0000 UTC" firstStartedPulling="2026-01-22 17:10:41.298042805 +0000 UTC m=+2462.781382090" lastFinishedPulling="2026-01-22 17:10:42.069414633 +0000 UTC m=+2463.552753918" observedRunningTime="2026-01-22 17:10:43.346859024 +0000 UTC m=+2464.830198299" watchObservedRunningTime="2026-01-22 17:10:43.359218262 +0000 UTC m=+2464.842557567" Jan 22 17:10:47 crc kubenswrapper[4758]: I0122 17:10:47.808244 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:10:47 crc kubenswrapper[4758]: E0122 17:10:47.809095 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:11:01 crc kubenswrapper[4758]: I0122 17:11:01.809046 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:11:01 crc kubenswrapper[4758]: E0122 17:11:01.809948 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 
17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.219781 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vlf"] Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.222668 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.259142 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vlf"] Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.316922 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5nj\" (UniqueName: \"kubernetes.io/projected/d5a27c27-03e3-48db-9859-b14942df5c08-kube-api-access-dx5nj\") pod \"redhat-marketplace-d4vlf\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.317145 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-catalog-content\") pod \"redhat-marketplace-d4vlf\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.317233 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-utilities\") pod \"redhat-marketplace-d4vlf\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.418212 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-catalog-content\") pod \"redhat-marketplace-d4vlf\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.418292 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-utilities\") pod \"redhat-marketplace-d4vlf\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.418323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5nj\" (UniqueName: \"kubernetes.io/projected/d5a27c27-03e3-48db-9859-b14942df5c08-kube-api-access-dx5nj\") pod \"redhat-marketplace-d4vlf\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.418888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-utilities\") pod \"redhat-marketplace-d4vlf\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.419107 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-catalog-content\") pod \"redhat-marketplace-d4vlf\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.437941 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5nj\" (UniqueName: \"kubernetes.io/projected/d5a27c27-03e3-48db-9859-b14942df5c08-kube-api-access-dx5nj\") pod \"redhat-marketplace-d4vlf\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:05 crc kubenswrapper[4758]: I0122 17:11:05.562548 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:06 crc kubenswrapper[4758]: I0122 17:11:06.048603 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vlf"] Jan 22 17:11:06 crc kubenswrapper[4758]: I0122 17:11:06.627966 4758 generic.go:334] "Generic (PLEG): container finished" podID="d5a27c27-03e3-48db-9859-b14942df5c08" containerID="81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b" exitCode=0 Jan 22 17:11:06 crc kubenswrapper[4758]: I0122 17:11:06.628015 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vlf" event={"ID":"d5a27c27-03e3-48db-9859-b14942df5c08","Type":"ContainerDied","Data":"81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b"} Jan 22 17:11:06 crc kubenswrapper[4758]: I0122 17:11:06.628043 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vlf" event={"ID":"d5a27c27-03e3-48db-9859-b14942df5c08","Type":"ContainerStarted","Data":"0646ef5ddfe878f5f676bb5882630afa28f4d59f33fff061b7a25e6c82b2b1e0"} Jan 22 17:11:08 crc kubenswrapper[4758]: I0122 17:11:08.656237 4758 generic.go:334] "Generic (PLEG): container finished" podID="d5a27c27-03e3-48db-9859-b14942df5c08" containerID="a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3" exitCode=0 Jan 22 17:11:08 crc kubenswrapper[4758]: I0122 17:11:08.656418 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vlf" event={"ID":"d5a27c27-03e3-48db-9859-b14942df5c08","Type":"ContainerDied","Data":"a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3"} Jan 22 17:11:09 crc kubenswrapper[4758]: I0122 17:11:09.668986 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vlf" event={"ID":"d5a27c27-03e3-48db-9859-b14942df5c08","Type":"ContainerStarted","Data":"6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504"} Jan 22 17:11:09 crc kubenswrapper[4758]: I0122 17:11:09.700767 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d4vlf" podStartSLOduration=1.955909795 podStartE2EDuration="4.700724012s" podCreationTimestamp="2026-01-22 17:11:05 +0000 UTC" firstStartedPulling="2026-01-22 17:11:06.629777 +0000 UTC m=+2488.113116285" lastFinishedPulling="2026-01-22 17:11:09.374591217 +0000 UTC m=+2490.857930502" observedRunningTime="2026-01-22 17:11:09.688388806 +0000 UTC m=+2491.171728091" watchObservedRunningTime="2026-01-22 17:11:09.700724012 +0000 UTC m=+2491.184063297" Jan 22 17:11:15 crc kubenswrapper[4758]: I0122 17:11:15.563134 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:15 crc kubenswrapper[4758]: I0122 17:11:15.563871 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:15 crc kubenswrapper[4758]: I0122 17:11:15.677870 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:15 crc kubenswrapper[4758]: I0122 17:11:15.809133 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:11:15 crc kubenswrapper[4758]: E0122 17:11:15.809646 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:11:15 crc kubenswrapper[4758]: I0122 17:11:15.935716 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:15 crc kubenswrapper[4758]: I0122 17:11:15.989651 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vlf"] Jan 22 17:11:17 crc kubenswrapper[4758]: I0122 17:11:17.884809 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d4vlf" podUID="d5a27c27-03e3-48db-9859-b14942df5c08" containerName="registry-server" containerID="cri-o://6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504" gracePeriod=2 Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.450824 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.642378 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-catalog-content\") pod \"d5a27c27-03e3-48db-9859-b14942df5c08\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.642728 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-utilities\") pod \"d5a27c27-03e3-48db-9859-b14942df5c08\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.642840 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx5nj\" (UniqueName: \"kubernetes.io/projected/d5a27c27-03e3-48db-9859-b14942df5c08-kube-api-access-dx5nj\") pod \"d5a27c27-03e3-48db-9859-b14942df5c08\" (UID: \"d5a27c27-03e3-48db-9859-b14942df5c08\") " Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.643538 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-utilities" (OuterVolumeSpecName: "utilities") pod "d5a27c27-03e3-48db-9859-b14942df5c08" (UID: "d5a27c27-03e3-48db-9859-b14942df5c08"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.644213 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.648998 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a27c27-03e3-48db-9859-b14942df5c08-kube-api-access-dx5nj" (OuterVolumeSpecName: "kube-api-access-dx5nj") pod "d5a27c27-03e3-48db-9859-b14942df5c08" (UID: "d5a27c27-03e3-48db-9859-b14942df5c08"). InnerVolumeSpecName "kube-api-access-dx5nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.670308 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5a27c27-03e3-48db-9859-b14942df5c08" (UID: "d5a27c27-03e3-48db-9859-b14942df5c08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.746810 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5a27c27-03e3-48db-9859-b14942df5c08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.746851 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx5nj\" (UniqueName: \"kubernetes.io/projected/d5a27c27-03e3-48db-9859-b14942df5c08-kube-api-access-dx5nj\") on node \"crc\" DevicePath \"\"" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.899293 4758 generic.go:334] "Generic (PLEG): container finished" podID="d5a27c27-03e3-48db-9859-b14942df5c08" containerID="6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504" exitCode=0 Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.899352 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vlf" event={"ID":"d5a27c27-03e3-48db-9859-b14942df5c08","Type":"ContainerDied","Data":"6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504"} Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.899386 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vlf" event={"ID":"d5a27c27-03e3-48db-9859-b14942df5c08","Type":"ContainerDied","Data":"0646ef5ddfe878f5f676bb5882630afa28f4d59f33fff061b7a25e6c82b2b1e0"} Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.899407 4758 scope.go:117] "RemoveContainer" containerID="6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.899413 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4vlf" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.944543 4758 scope.go:117] "RemoveContainer" containerID="a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.952475 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vlf"] Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.969706 4758 scope.go:117] "RemoveContainer" containerID="81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b" Jan 22 17:11:18 crc kubenswrapper[4758]: I0122 17:11:18.972483 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vlf"] Jan 22 17:11:19 crc kubenswrapper[4758]: I0122 17:11:19.014706 4758 scope.go:117] "RemoveContainer" containerID="6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504" Jan 22 17:11:19 crc kubenswrapper[4758]: E0122 17:11:19.015218 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504\": container with ID starting with 6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504 not found: ID does not exist" containerID="6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504" Jan 22 17:11:19 crc kubenswrapper[4758]: I0122 17:11:19.015294 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504"} err="failed to get container status \"6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504\": rpc error: code = NotFound desc = could not find container \"6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504\": container with ID starting with 6a7283e410bf76cbf6c9d0c6e3ddb5b9acc1287f7eb814b7b55d80c7ed580504 not found: ID does not exist" Jan 22 17:11:19 crc kubenswrapper[4758]: I0122 17:11:19.015325 4758 scope.go:117] "RemoveContainer" containerID="a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3" Jan 22 17:11:19 crc kubenswrapper[4758]: E0122 17:11:19.015766 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3\": container with ID starting with a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3 not found: ID does not exist" containerID="a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3" Jan 22 17:11:19 crc kubenswrapper[4758]: I0122 17:11:19.015799 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3"} err="failed to get container status \"a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3\": rpc error: code = NotFound desc = could not find container \"a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3\": container with ID starting with a600655186a663da20f33fab267a13271f40d86a091de37ac292b6dc3d07dcc3 not found: ID does not exist" Jan 22 17:11:19 crc kubenswrapper[4758]: I0122 17:11:19.015820 4758 scope.go:117] "RemoveContainer" containerID="81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b" Jan 22 17:11:19 crc kubenswrapper[4758]: E0122 17:11:19.016103 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b\": container with ID starting with 81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b not found: ID does not exist" containerID="81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b" Jan 22 17:11:19 crc kubenswrapper[4758]: I0122 17:11:19.016140 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b"} err="failed to get container status \"81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b\": rpc error: code = NotFound desc = could not find container \"81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b\": container with ID starting with 81515407dcb3cafc7e41468cca522d576bdf206ca1d2444197d36136262ca07b not found: ID does not exist" Jan 22 17:11:20 crc kubenswrapper[4758]: I0122 17:11:20.821317 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a27c27-03e3-48db-9859-b14942df5c08" path="/var/lib/kubelet/pods/d5a27c27-03e3-48db-9859-b14942df5c08/volumes" Jan 22 17:11:27 crc kubenswrapper[4758]: I0122 17:11:27.808632 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:11:27 crc kubenswrapper[4758]: E0122 17:11:27.809981 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:11:39 crc kubenswrapper[4758]: I0122 17:11:39.808991 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:11:39 crc kubenswrapper[4758]: E0122 17:11:39.810459 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:11:42 crc kubenswrapper[4758]: I0122 17:11:42.153921 4758 generic.go:334] "Generic (PLEG): container finished" podID="0a76cd73-4259-4fa1-8846-f645ef6603b1" containerID="dfc3900bb28facc133d7de0e2842105a17149921ae71782bf59e40419b4bafcf" exitCode=0 Jan 22 17:11:42 crc kubenswrapper[4758]: I0122 17:11:42.154028 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" event={"ID":"0a76cd73-4259-4fa1-8846-f645ef6603b1","Type":"ContainerDied","Data":"dfc3900bb28facc133d7de0e2842105a17149921ae71782bf59e40419b4bafcf"} Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.578148 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.768297 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-metadata-combined-ca-bundle\") pod \"0a76cd73-4259-4fa1-8846-f645ef6603b1\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.768449 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-inventory\") pod \"0a76cd73-4259-4fa1-8846-f645ef6603b1\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.768588 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv6bz\" (UniqueName: \"kubernetes.io/projected/0a76cd73-4259-4fa1-8846-f645ef6603b1-kube-api-access-sv6bz\") pod \"0a76cd73-4259-4fa1-8846-f645ef6603b1\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.768704 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0a76cd73-4259-4fa1-8846-f645ef6603b1\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.768766 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-ssh-key-openstack-edpm-ipam\") pod \"0a76cd73-4259-4fa1-8846-f645ef6603b1\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.768793 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-nova-metadata-neutron-config-0\") pod \"0a76cd73-4259-4fa1-8846-f645ef6603b1\" (UID: \"0a76cd73-4259-4fa1-8846-f645ef6603b1\") " Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.775650 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0a76cd73-4259-4fa1-8846-f645ef6603b1" (UID: "0a76cd73-4259-4fa1-8846-f645ef6603b1"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.775683 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a76cd73-4259-4fa1-8846-f645ef6603b1-kube-api-access-sv6bz" (OuterVolumeSpecName: "kube-api-access-sv6bz") pod "0a76cd73-4259-4fa1-8846-f645ef6603b1" (UID: "0a76cd73-4259-4fa1-8846-f645ef6603b1"). InnerVolumeSpecName "kube-api-access-sv6bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.808576 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a76cd73-4259-4fa1-8846-f645ef6603b1" (UID: "0a76cd73-4259-4fa1-8846-f645ef6603b1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.809123 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0a76cd73-4259-4fa1-8846-f645ef6603b1" (UID: "0a76cd73-4259-4fa1-8846-f645ef6603b1"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.811439 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0a76cd73-4259-4fa1-8846-f645ef6603b1" (UID: "0a76cd73-4259-4fa1-8846-f645ef6603b1"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.816859 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-inventory" (OuterVolumeSpecName: "inventory") pod "0a76cd73-4259-4fa1-8846-f645ef6603b1" (UID: "0a76cd73-4259-4fa1-8846-f645ef6603b1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.871435 4758 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.871487 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.871502 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv6bz\" (UniqueName: \"kubernetes.io/projected/0a76cd73-4259-4fa1-8846-f645ef6603b1-kube-api-access-sv6bz\") on node \"crc\" DevicePath \"\"" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.871517 4758 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.871533 4758 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:11:43 crc kubenswrapper[4758]: I0122 17:11:43.871549 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a76cd73-4259-4fa1-8846-f645ef6603b1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.184257 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" event={"ID":"0a76cd73-4259-4fa1-8846-f645ef6603b1","Type":"ContainerDied","Data":"690de9718c3edc93c9afacf89e8c242ee0d7f62f04b635f68556a78d2d5ffab5"} Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.184336 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="690de9718c3edc93c9afacf89e8c242ee0d7f62f04b635f68556a78d2d5ffab5" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.184339 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.388241 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88"] Jan 22 17:11:44 crc kubenswrapper[4758]: E0122 17:11:44.388869 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a76cd73-4259-4fa1-8846-f645ef6603b1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.388905 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a76cd73-4259-4fa1-8846-f645ef6603b1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 17:11:44 crc kubenswrapper[4758]: E0122 17:11:44.388928 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a27c27-03e3-48db-9859-b14942df5c08" containerName="registry-server" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.388940 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a27c27-03e3-48db-9859-b14942df5c08" containerName="registry-server" Jan 22 17:11:44 crc kubenswrapper[4758]: E0122 17:11:44.388965 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a27c27-03e3-48db-9859-b14942df5c08" containerName="extract-content" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.388976 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a27c27-03e3-48db-9859-b14942df5c08" containerName="extract-content" Jan 22 17:11:44 crc kubenswrapper[4758]: E0122 17:11:44.389003 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a27c27-03e3-48db-9859-b14942df5c08" containerName="extract-utilities" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.389013 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a27c27-03e3-48db-9859-b14942df5c08" containerName="extract-utilities" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.389395 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a27c27-03e3-48db-9859-b14942df5c08" containerName="registry-server" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.389433 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a76cd73-4259-4fa1-8846-f645ef6603b1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.390516 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.395027 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.395320 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.395584 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.395930 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.396945 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.404790 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88"] Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.584272 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.584362 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.584998 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k58zj\" (UniqueName: \"kubernetes.io/projected/23a50ad6-72f6-49e1-b41f-7ab16b033783-kube-api-access-k58zj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.585056 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.585094 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.687084 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.687219 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k58zj\" (UniqueName: \"kubernetes.io/projected/23a50ad6-72f6-49e1-b41f-7ab16b033783-kube-api-access-k58zj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.687268 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.687308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.687407 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.692219 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.693322 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.698970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.700403 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.715870 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k58zj\" (UniqueName: \"kubernetes.io/projected/23a50ad6-72f6-49e1-b41f-7ab16b033783-kube-api-access-k58zj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vlm88\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:44 crc kubenswrapper[4758]: I0122 17:11:44.734598 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:11:45 crc kubenswrapper[4758]: I0122 17:11:45.413938 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88"] Jan 22 17:11:46 crc kubenswrapper[4758]: I0122 17:11:46.210503 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" event={"ID":"23a50ad6-72f6-49e1-b41f-7ab16b033783","Type":"ContainerStarted","Data":"e03ef077c6b4cc10bb23021729a9532f97e09e451e7692fb0a78496c160812e0"} Jan 22 17:11:46 crc kubenswrapper[4758]: I0122 17:11:46.210975 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" event={"ID":"23a50ad6-72f6-49e1-b41f-7ab16b033783","Type":"ContainerStarted","Data":"2626ae2f808ef5f60456c1dd9eb539d758348ed7900e3df0e2e1bdcb142e6f07"} Jan 22 17:11:46 crc kubenswrapper[4758]: I0122 17:11:46.232198 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" podStartSLOduration=1.749623637 podStartE2EDuration="2.232160307s" podCreationTimestamp="2026-01-22 17:11:44 +0000 UTC" firstStartedPulling="2026-01-22 17:11:45.421315961 +0000 UTC m=+2526.904655246" lastFinishedPulling="2026-01-22 17:11:45.903852591 +0000 UTC m=+2527.387191916" observedRunningTime="2026-01-22 17:11:46.226222084 +0000 UTC m=+2527.709561379" watchObservedRunningTime="2026-01-22 17:11:46.232160307 +0000 UTC m=+2527.715499602" Jan 22 17:11:51 crc kubenswrapper[4758]: I0122 17:11:51.809896 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:11:51 crc kubenswrapper[4758]: E0122 17:11:51.812486 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:12:03 crc kubenswrapper[4758]: I0122 17:12:03.809060 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:12:03 crc kubenswrapper[4758]: E0122 17:12:03.810627 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:12:15 crc kubenswrapper[4758]: I0122 17:12:15.808000 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:12:15 crc kubenswrapper[4758]: E0122 17:12:15.808894 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:12:28 crc kubenswrapper[4758]: I0122 17:12:28.813827 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:12:28 crc kubenswrapper[4758]: E0122 17:12:28.814951 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:12:40 crc kubenswrapper[4758]: I0122 17:12:40.809021 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:12:40 crc kubenswrapper[4758]: E0122 17:12:40.810207 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:12:51 crc kubenswrapper[4758]: I0122 17:12:51.808431 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:12:51 crc kubenswrapper[4758]: E0122 17:12:51.811447 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:12:57 crc kubenswrapper[4758]: I0122 17:12:57.803757 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hlswl"] Jan 22 17:12:57 crc kubenswrapper[4758]: I0122 17:12:57.808491 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:57 crc kubenswrapper[4758]: I0122 17:12:57.842317 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hlswl"] Jan 22 17:12:57 crc kubenswrapper[4758]: I0122 17:12:57.944811 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-catalog-content\") pod \"redhat-operators-hlswl\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:57 crc kubenswrapper[4758]: I0122 17:12:57.944864 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-utilities\") pod \"redhat-operators-hlswl\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:57 crc kubenswrapper[4758]: I0122 17:12:57.945676 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z899m\" (UniqueName: \"kubernetes.io/projected/7807361e-833b-4509-a14d-560cdb429c01-kube-api-access-z899m\") pod \"redhat-operators-hlswl\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.047897 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z899m\" (UniqueName: \"kubernetes.io/projected/7807361e-833b-4509-a14d-560cdb429c01-kube-api-access-z899m\") pod \"redhat-operators-hlswl\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.048471 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-catalog-content\") pod \"redhat-operators-hlswl\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.048521 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-utilities\") pod \"redhat-operators-hlswl\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.049169 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-utilities\") pod \"redhat-operators-hlswl\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.049489 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-catalog-content\") pod \"redhat-operators-hlswl\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.078471 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z899m\" (UniqueName: \"kubernetes.io/projected/7807361e-833b-4509-a14d-560cdb429c01-kube-api-access-z899m\") pod \"redhat-operators-hlswl\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.143121 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.680076 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hlswl"] Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.955764 4758 generic.go:334] "Generic (PLEG): container finished" podID="7807361e-833b-4509-a14d-560cdb429c01" containerID="5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0" exitCode=0 Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.955846 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlswl" event={"ID":"7807361e-833b-4509-a14d-560cdb429c01","Type":"ContainerDied","Data":"5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0"} Jan 22 17:12:58 crc kubenswrapper[4758]: I0122 17:12:58.956116 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlswl" event={"ID":"7807361e-833b-4509-a14d-560cdb429c01","Type":"ContainerStarted","Data":"3ef8ec1ade53b266c6dd44a67f07339d52138761b6b550206b4a7d56dc4d1926"} Jan 22 17:12:59 crc kubenswrapper[4758]: I0122 17:12:59.970658 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlswl" event={"ID":"7807361e-833b-4509-a14d-560cdb429c01","Type":"ContainerStarted","Data":"8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0"} Jan 22 17:13:03 crc kubenswrapper[4758]: I0122 17:13:03.014323 4758 generic.go:334] "Generic (PLEG): container finished" podID="7807361e-833b-4509-a14d-560cdb429c01" containerID="8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0" exitCode=0 Jan 22 17:13:03 crc kubenswrapper[4758]: I0122 17:13:03.014640 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlswl" event={"ID":"7807361e-833b-4509-a14d-560cdb429c01","Type":"ContainerDied","Data":"8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0"} Jan 22 17:13:04 crc kubenswrapper[4758]: I0122 17:13:04.036053 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlswl" event={"ID":"7807361e-833b-4509-a14d-560cdb429c01","Type":"ContainerStarted","Data":"318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd"} Jan 22 17:13:04 crc kubenswrapper[4758]: I0122 17:13:04.061444 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hlswl" podStartSLOduration=2.539697488 podStartE2EDuration="7.061412288s" podCreationTimestamp="2026-01-22 17:12:57 +0000 UTC" firstStartedPulling="2026-01-22 17:12:58.957582689 +0000 UTC m=+2600.440921974" lastFinishedPulling="2026-01-22 17:13:03.479297479 +0000 UTC m=+2604.962636774" observedRunningTime="2026-01-22 17:13:04.061096249 +0000 UTC m=+2605.544435574" watchObservedRunningTime="2026-01-22 17:13:04.061412288 +0000 UTC m=+2605.544751573" Jan 22 17:13:05 crc kubenswrapper[4758]: I0122 17:13:05.809303 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 
17:13:05 crc kubenswrapper[4758]: E0122 17:13:05.809766 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:13:08 crc kubenswrapper[4758]: I0122 17:13:08.143331 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:13:08 crc kubenswrapper[4758]: I0122 17:13:08.144782 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:13:09 crc kubenswrapper[4758]: I0122 17:13:09.193917 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hlswl" podUID="7807361e-833b-4509-a14d-560cdb429c01" containerName="registry-server" probeResult="failure" output=< Jan 22 17:13:09 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 17:13:09 crc kubenswrapper[4758]: > Jan 22 17:13:18 crc kubenswrapper[4758]: I0122 17:13:18.208273 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:13:18 crc kubenswrapper[4758]: I0122 17:13:18.266058 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:13:18 crc kubenswrapper[4758]: I0122 17:13:18.459248 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hlswl"] Jan 22 17:13:18 crc kubenswrapper[4758]: I0122 17:13:18.825306 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:13:18 crc kubenswrapper[4758]: E0122 17:13:18.825980 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.185730 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hlswl" podUID="7807361e-833b-4509-a14d-560cdb429c01" containerName="registry-server" containerID="cri-o://318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd" gracePeriod=2 Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.668287 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.671656 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-catalog-content\") pod \"7807361e-833b-4509-a14d-560cdb429c01\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.671696 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z899m\" (UniqueName: \"kubernetes.io/projected/7807361e-833b-4509-a14d-560cdb429c01-kube-api-access-z899m\") pod \"7807361e-833b-4509-a14d-560cdb429c01\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.671825 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-utilities\") pod \"7807361e-833b-4509-a14d-560cdb429c01\" (UID: \"7807361e-833b-4509-a14d-560cdb429c01\") " Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.673272 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-utilities" (OuterVolumeSpecName: "utilities") pod "7807361e-833b-4509-a14d-560cdb429c01" (UID: "7807361e-833b-4509-a14d-560cdb429c01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.686511 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7807361e-833b-4509-a14d-560cdb429c01-kube-api-access-z899m" (OuterVolumeSpecName: "kube-api-access-z899m") pod "7807361e-833b-4509-a14d-560cdb429c01" (UID: "7807361e-833b-4509-a14d-560cdb429c01"). InnerVolumeSpecName "kube-api-access-z899m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.773846 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z899m\" (UniqueName: \"kubernetes.io/projected/7807361e-833b-4509-a14d-560cdb429c01-kube-api-access-z899m\") on node \"crc\" DevicePath \"\"" Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.773897 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.824399 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7807361e-833b-4509-a14d-560cdb429c01" (UID: "7807361e-833b-4509-a14d-560cdb429c01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:13:20 crc kubenswrapper[4758]: I0122 17:13:20.875488 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7807361e-833b-4509-a14d-560cdb429c01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.208347 4758 generic.go:334] "Generic (PLEG): container finished" podID="7807361e-833b-4509-a14d-560cdb429c01" containerID="318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd" exitCode=0 Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.208418 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlswl" event={"ID":"7807361e-833b-4509-a14d-560cdb429c01","Type":"ContainerDied","Data":"318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd"} Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.208451 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlswl" Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.208474 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlswl" event={"ID":"7807361e-833b-4509-a14d-560cdb429c01","Type":"ContainerDied","Data":"3ef8ec1ade53b266c6dd44a67f07339d52138761b6b550206b4a7d56dc4d1926"} Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.208503 4758 scope.go:117] "RemoveContainer" containerID="318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd" Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.240680 4758 scope.go:117] "RemoveContainer" containerID="8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0" Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.266224 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hlswl"] Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.270942 4758 scope.go:117] "RemoveContainer" containerID="5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0" Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.274922 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hlswl"] Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.352929 4758 scope.go:117] "RemoveContainer" containerID="318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd" Jan 22 17:13:21 crc kubenswrapper[4758]: E0122 17:13:21.355259 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd\": container with ID starting with 318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd not found: ID does not exist" containerID="318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd" Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.355336 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd"} err="failed to get container status \"318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd\": rpc error: code = NotFound desc = could not find container \"318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd\": container with ID starting with 318043e09d7b252accc0cb21b655827c6688fd9a9fdf650ec35518e1786469bd not found: ID does not exist" Jan 22 17:13:21 crc 
kubenswrapper[4758]: I0122 17:13:21.355373 4758 scope.go:117] "RemoveContainer" containerID="8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0" Jan 22 17:13:21 crc kubenswrapper[4758]: E0122 17:13:21.359103 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0\": container with ID starting with 8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0 not found: ID does not exist" containerID="8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0" Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.359141 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0"} err="failed to get container status \"8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0\": rpc error: code = NotFound desc = could not find container \"8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0\": container with ID starting with 8903f537135e9cf594a29bd00b3e3e3900ee83b926e607527962554ee3f167c0 not found: ID does not exist" Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.359166 4758 scope.go:117] "RemoveContainer" containerID="5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0" Jan 22 17:13:21 crc kubenswrapper[4758]: E0122 17:13:21.363039 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0\": container with ID starting with 5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0 not found: ID does not exist" containerID="5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0" Jan 22 17:13:21 crc kubenswrapper[4758]: I0122 17:13:21.363096 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0"} err="failed to get container status \"5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0\": rpc error: code = NotFound desc = could not find container \"5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0\": container with ID starting with 5308791d21803ea1668d57af42c3794f1e00e5358b2213263ca6f27fc47b75d0 not found: ID does not exist" Jan 22 17:13:22 crc kubenswrapper[4758]: I0122 17:13:22.820650 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7807361e-833b-4509-a14d-560cdb429c01" path="/var/lib/kubelet/pods/7807361e-833b-4509-a14d-560cdb429c01/volumes" Jan 22 17:13:29 crc kubenswrapper[4758]: I0122 17:13:29.808484 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:13:29 crc kubenswrapper[4758]: E0122 17:13:29.811708 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:13:43 crc kubenswrapper[4758]: I0122 17:13:43.808998 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" 
Jan 22 17:13:43 crc kubenswrapper[4758]: E0122 17:13:43.809966 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:13:55 crc kubenswrapper[4758]: I0122 17:13:55.808652 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:13:55 crc kubenswrapper[4758]: E0122 17:13:55.809491 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:14:07 crc kubenswrapper[4758]: I0122 17:14:07.809021 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:14:07 crc kubenswrapper[4758]: E0122 17:14:07.809929 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:14:19 crc kubenswrapper[4758]: I0122 17:14:19.809488 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:14:19 crc kubenswrapper[4758]: E0122 17:14:19.810212 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:14:32 crc kubenswrapper[4758]: I0122 17:14:32.808612 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:14:32 crc kubenswrapper[4758]: E0122 17:14:32.809568 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:14:43 crc kubenswrapper[4758]: I0122 17:14:43.809136 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:14:43 crc kubenswrapper[4758]: E0122 17:14:43.810225 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:14:57 crc kubenswrapper[4758]: I0122 17:14:57.808776 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:14:58 crc kubenswrapper[4758]: I0122 17:14:58.418649 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"01fb1e209dcbeaf3580f2514e490323105bdb6768d6254ceaacb76d57033f58c"} Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.163814 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j"] Jan 22 17:15:00 crc kubenswrapper[4758]: E0122 17:15:00.164860 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7807361e-833b-4509-a14d-560cdb429c01" containerName="extract-utilities" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.164880 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7807361e-833b-4509-a14d-560cdb429c01" containerName="extract-utilities" Jan 22 17:15:00 crc kubenswrapper[4758]: E0122 17:15:00.164912 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7807361e-833b-4509-a14d-560cdb429c01" containerName="registry-server" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.164918 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7807361e-833b-4509-a14d-560cdb429c01" containerName="registry-server" Jan 22 17:15:00 crc kubenswrapper[4758]: E0122 17:15:00.164928 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7807361e-833b-4509-a14d-560cdb429c01" containerName="extract-content" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.164936 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7807361e-833b-4509-a14d-560cdb429c01" containerName="extract-content" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.165171 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7807361e-833b-4509-a14d-560cdb429c01" containerName="registry-server" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.165943 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.168023 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.169956 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.199678 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j"] Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.304288 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/748791b9-ce3e-4a89-8098-318c6da7b3db-config-volume\") pod \"collect-profiles-29485035-ntr6j\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.304458 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-222kg\" (UniqueName: \"kubernetes.io/projected/748791b9-ce3e-4a89-8098-318c6da7b3db-kube-api-access-222kg\") pod \"collect-profiles-29485035-ntr6j\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.304500 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/748791b9-ce3e-4a89-8098-318c6da7b3db-secret-volume\") pod \"collect-profiles-29485035-ntr6j\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.406307 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/748791b9-ce3e-4a89-8098-318c6da7b3db-config-volume\") pod \"collect-profiles-29485035-ntr6j\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.406394 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-222kg\" (UniqueName: \"kubernetes.io/projected/748791b9-ce3e-4a89-8098-318c6da7b3db-kube-api-access-222kg\") pod \"collect-profiles-29485035-ntr6j\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.406439 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/748791b9-ce3e-4a89-8098-318c6da7b3db-secret-volume\") pod \"collect-profiles-29485035-ntr6j\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.408698 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/748791b9-ce3e-4a89-8098-318c6da7b3db-config-volume\") pod 
\"collect-profiles-29485035-ntr6j\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.419873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/748791b9-ce3e-4a89-8098-318c6da7b3db-secret-volume\") pod \"collect-profiles-29485035-ntr6j\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.429138 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-222kg\" (UniqueName: \"kubernetes.io/projected/748791b9-ce3e-4a89-8098-318c6da7b3db-kube-api-access-222kg\") pod \"collect-profiles-29485035-ntr6j\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.491085 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:00 crc kubenswrapper[4758]: W0122 17:15:00.986854 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748791b9_ce3e_4a89_8098_318c6da7b3db.slice/crio-705b3aea2bff00085e9218d3b2d1bd0f50e979fea3a210cc063e98e2674ce514 WatchSource:0}: Error finding container 705b3aea2bff00085e9218d3b2d1bd0f50e979fea3a210cc063e98e2674ce514: Status 404 returned error can't find the container with id 705b3aea2bff00085e9218d3b2d1bd0f50e979fea3a210cc063e98e2674ce514 Jan 22 17:15:00 crc kubenswrapper[4758]: I0122 17:15:00.987345 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j"] Jan 22 17:15:01 crc kubenswrapper[4758]: I0122 17:15:01.445491 4758 generic.go:334] "Generic (PLEG): container finished" podID="748791b9-ce3e-4a89-8098-318c6da7b3db" containerID="35ec117f9c484d69b152a3eaba3229c7b0ea74ffb48c0b003079715239cdcb7a" exitCode=0 Jan 22 17:15:01 crc kubenswrapper[4758]: I0122 17:15:01.445566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" event={"ID":"748791b9-ce3e-4a89-8098-318c6da7b3db","Type":"ContainerDied","Data":"35ec117f9c484d69b152a3eaba3229c7b0ea74ffb48c0b003079715239cdcb7a"} Jan 22 17:15:01 crc kubenswrapper[4758]: I0122 17:15:01.445896 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" event={"ID":"748791b9-ce3e-4a89-8098-318c6da7b3db","Type":"ContainerStarted","Data":"705b3aea2bff00085e9218d3b2d1bd0f50e979fea3a210cc063e98e2674ce514"} Jan 22 17:15:02 crc kubenswrapper[4758]: I0122 17:15:02.863525 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:02 crc kubenswrapper[4758]: I0122 17:15:02.962616 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/748791b9-ce3e-4a89-8098-318c6da7b3db-config-volume\") pod \"748791b9-ce3e-4a89-8098-318c6da7b3db\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " Jan 22 17:15:02 crc kubenswrapper[4758]: I0122 17:15:02.962856 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/748791b9-ce3e-4a89-8098-318c6da7b3db-secret-volume\") pod \"748791b9-ce3e-4a89-8098-318c6da7b3db\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " Jan 22 17:15:02 crc kubenswrapper[4758]: I0122 17:15:02.962898 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-222kg\" (UniqueName: \"kubernetes.io/projected/748791b9-ce3e-4a89-8098-318c6da7b3db-kube-api-access-222kg\") pod \"748791b9-ce3e-4a89-8098-318c6da7b3db\" (UID: \"748791b9-ce3e-4a89-8098-318c6da7b3db\") " Jan 22 17:15:02 crc kubenswrapper[4758]: I0122 17:15:02.963295 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/748791b9-ce3e-4a89-8098-318c6da7b3db-config-volume" (OuterVolumeSpecName: "config-volume") pod "748791b9-ce3e-4a89-8098-318c6da7b3db" (UID: "748791b9-ce3e-4a89-8098-318c6da7b3db"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:15:02 crc kubenswrapper[4758]: I0122 17:15:02.963602 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/748791b9-ce3e-4a89-8098-318c6da7b3db-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:15:02 crc kubenswrapper[4758]: I0122 17:15:02.968943 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748791b9-ce3e-4a89-8098-318c6da7b3db-kube-api-access-222kg" (OuterVolumeSpecName: "kube-api-access-222kg") pod "748791b9-ce3e-4a89-8098-318c6da7b3db" (UID: "748791b9-ce3e-4a89-8098-318c6da7b3db"). InnerVolumeSpecName "kube-api-access-222kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:15:02 crc kubenswrapper[4758]: I0122 17:15:02.969644 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748791b9-ce3e-4a89-8098-318c6da7b3db-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "748791b9-ce3e-4a89-8098-318c6da7b3db" (UID: "748791b9-ce3e-4a89-8098-318c6da7b3db"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:15:03 crc kubenswrapper[4758]: I0122 17:15:03.065675 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/748791b9-ce3e-4a89-8098-318c6da7b3db-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:15:03 crc kubenswrapper[4758]: I0122 17:15:03.065707 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-222kg\" (UniqueName: \"kubernetes.io/projected/748791b9-ce3e-4a89-8098-318c6da7b3db-kube-api-access-222kg\") on node \"crc\" DevicePath \"\"" Jan 22 17:15:03 crc kubenswrapper[4758]: I0122 17:15:03.470715 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" event={"ID":"748791b9-ce3e-4a89-8098-318c6da7b3db","Type":"ContainerDied","Data":"705b3aea2bff00085e9218d3b2d1bd0f50e979fea3a210cc063e98e2674ce514"} Jan 22 17:15:03 crc kubenswrapper[4758]: I0122 17:15:03.470791 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="705b3aea2bff00085e9218d3b2d1bd0f50e979fea3a210cc063e98e2674ce514" Jan 22 17:15:03 crc kubenswrapper[4758]: I0122 17:15:03.470857 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j" Jan 22 17:15:03 crc kubenswrapper[4758]: I0122 17:15:03.942801 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct"] Jan 22 17:15:03 crc kubenswrapper[4758]: I0122 17:15:03.951092 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-bjkct"] Jan 22 17:15:04 crc kubenswrapper[4758]: I0122 17:15:04.821422 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec5698b-f4e0-4c73-abe0-f999df35f0c6" path="/var/lib/kubelet/pods/cec5698b-f4e0-4c73-abe0-f999df35f0c6/volumes" Jan 22 17:16:02 crc kubenswrapper[4758]: I0122 17:16:02.089052 4758 scope.go:117] "RemoveContainer" containerID="ac7c55b44df7dfc84a1aee9d072b00ab1099d6746d5676554bf47046ad89de10" Jan 22 17:17:13 crc kubenswrapper[4758]: I0122 17:17:13.837481 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:17:13 crc kubenswrapper[4758]: I0122 17:17:13.838299 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:17:15 crc kubenswrapper[4758]: I0122 17:17:15.967573 4758 generic.go:334] "Generic (PLEG): container finished" podID="23a50ad6-72f6-49e1-b41f-7ab16b033783" containerID="e03ef077c6b4cc10bb23021729a9532f97e09e451e7692fb0a78496c160812e0" exitCode=0 Jan 22 17:17:15 crc kubenswrapper[4758]: I0122 17:17:15.967651 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" event={"ID":"23a50ad6-72f6-49e1-b41f-7ab16b033783","Type":"ContainerDied","Data":"e03ef077c6b4cc10bb23021729a9532f97e09e451e7692fb0a78496c160812e0"} Jan 22 17:17:17 crc 
kubenswrapper[4758]: I0122 17:17:17.464046 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.511370 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-secret-0\") pod \"23a50ad6-72f6-49e1-b41f-7ab16b033783\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.511467 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k58zj\" (UniqueName: \"kubernetes.io/projected/23a50ad6-72f6-49e1-b41f-7ab16b033783-kube-api-access-k58zj\") pod \"23a50ad6-72f6-49e1-b41f-7ab16b033783\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.511503 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-combined-ca-bundle\") pod \"23a50ad6-72f6-49e1-b41f-7ab16b033783\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.511579 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-inventory\") pod \"23a50ad6-72f6-49e1-b41f-7ab16b033783\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.511653 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-ssh-key-openstack-edpm-ipam\") pod \"23a50ad6-72f6-49e1-b41f-7ab16b033783\" (UID: \"23a50ad6-72f6-49e1-b41f-7ab16b033783\") " Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.517192 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a50ad6-72f6-49e1-b41f-7ab16b033783-kube-api-access-k58zj" (OuterVolumeSpecName: "kube-api-access-k58zj") pod "23a50ad6-72f6-49e1-b41f-7ab16b033783" (UID: "23a50ad6-72f6-49e1-b41f-7ab16b033783"). InnerVolumeSpecName "kube-api-access-k58zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.521983 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "23a50ad6-72f6-49e1-b41f-7ab16b033783" (UID: "23a50ad6-72f6-49e1-b41f-7ab16b033783"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.540791 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-inventory" (OuterVolumeSpecName: "inventory") pod "23a50ad6-72f6-49e1-b41f-7ab16b033783" (UID: "23a50ad6-72f6-49e1-b41f-7ab16b033783"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.540827 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23a50ad6-72f6-49e1-b41f-7ab16b033783" (UID: "23a50ad6-72f6-49e1-b41f-7ab16b033783"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.559422 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "23a50ad6-72f6-49e1-b41f-7ab16b033783" (UID: "23a50ad6-72f6-49e1-b41f-7ab16b033783"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.613564 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.613596 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.613607 4758 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.613617 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k58zj\" (UniqueName: \"kubernetes.io/projected/23a50ad6-72f6-49e1-b41f-7ab16b033783-kube-api-access-k58zj\") on node \"crc\" DevicePath \"\"" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.613625 4758 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a50ad6-72f6-49e1-b41f-7ab16b033783-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.995455 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" event={"ID":"23a50ad6-72f6-49e1-b41f-7ab16b033783","Type":"ContainerDied","Data":"2626ae2f808ef5f60456c1dd9eb539d758348ed7900e3df0e2e1bdcb142e6f07"} Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.995804 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2626ae2f808ef5f60456c1dd9eb539d758348ed7900e3df0e2e1bdcb142e6f07" Jan 22 17:17:17 crc kubenswrapper[4758]: I0122 17:17:17.995512 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vlm88" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.149815 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728"] Jan 22 17:17:18 crc kubenswrapper[4758]: E0122 17:17:18.150359 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748791b9-ce3e-4a89-8098-318c6da7b3db" containerName="collect-profiles" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.150389 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="748791b9-ce3e-4a89-8098-318c6da7b3db" containerName="collect-profiles" Jan 22 17:17:18 crc kubenswrapper[4758]: E0122 17:17:18.150432 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a50ad6-72f6-49e1-b41f-7ab16b033783" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.150441 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a50ad6-72f6-49e1-b41f-7ab16b033783" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.150770 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a50ad6-72f6-49e1-b41f-7ab16b033783" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.150801 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="748791b9-ce3e-4a89-8098-318c6da7b3db" containerName="collect-profiles" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.151611 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.154681 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.154937 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.155050 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.154940 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.155464 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.155531 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.156835 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.162532 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728"] Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.224711 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: 
\"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.224777 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.224905 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbksx\" (UniqueName: \"kubernetes.io/projected/7cbdeacc-f53e-43de-9068-513ac27f1487-kube-api-access-mbksx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.225244 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.225309 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.225458 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.225539 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.225556 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.225697 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.327343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.327704 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.327835 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.327923 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbksx\" (UniqueName: \"kubernetes.io/projected/7cbdeacc-f53e-43de-9068-513ac27f1487-kube-api-access-mbksx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.328060 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.328156 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.328297 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.328403 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.328476 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.331247 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.331288 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.331418 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.331700 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.331772 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.332357 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.332902 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.333777 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.344367 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbksx\" (UniqueName: \"kubernetes.io/projected/7cbdeacc-f53e-43de-9068-513ac27f1487-kube-api-access-mbksx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7j728\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:18 crc kubenswrapper[4758]: I0122 17:17:18.470601 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:17:19 crc kubenswrapper[4758]: I0122 17:17:19.054139 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728"] Jan 22 17:17:19 crc kubenswrapper[4758]: I0122 17:17:19.060404 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:17:20 crc kubenswrapper[4758]: I0122 17:17:20.015113 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" event={"ID":"7cbdeacc-f53e-43de-9068-513ac27f1487","Type":"ContainerStarted","Data":"78848e0d0494e06df5092db6877c882a0d760c262b115fbd5bc7c08ac0ea7452"} Jan 22 17:17:20 crc kubenswrapper[4758]: I0122 17:17:20.015692 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" event={"ID":"7cbdeacc-f53e-43de-9068-513ac27f1487","Type":"ContainerStarted","Data":"f8a09b65d1c7aef1dcde2fb770b9b174c5c0dba0e5a079847a23ba9b36cae78c"} Jan 22 17:17:20 crc kubenswrapper[4758]: I0122 17:17:20.046782 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" podStartSLOduration=1.389060076 podStartE2EDuration="2.046733592s" podCreationTimestamp="2026-01-22 17:17:18 +0000 UTC" firstStartedPulling="2026-01-22 17:17:19.060158021 +0000 UTC m=+2860.543497306" lastFinishedPulling="2026-01-22 17:17:19.717831537 +0000 UTC m=+2861.201170822" observedRunningTime="2026-01-22 17:17:20.038454016 +0000 UTC m=+2861.521793301" watchObservedRunningTime="2026-01-22 17:17:20.046733592 +0000 UTC m=+2861.530072877" Jan 22 17:17:43 crc kubenswrapper[4758]: I0122 17:17:43.837322 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:17:43 crc kubenswrapper[4758]: I0122 17:17:43.838067 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:18:13 crc kubenswrapper[4758]: I0122 17:18:13.837906 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:18:13 crc kubenswrapper[4758]: I0122 17:18:13.838711 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:18:13 crc kubenswrapper[4758]: I0122 17:18:13.838859 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:18:13 crc kubenswrapper[4758]: I0122 17:18:13.840236 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01fb1e209dcbeaf3580f2514e490323105bdb6768d6254ceaacb76d57033f58c"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:18:13 crc kubenswrapper[4758]: I0122 17:18:13.840423 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://01fb1e209dcbeaf3580f2514e490323105bdb6768d6254ceaacb76d57033f58c" gracePeriod=600 Jan 22 17:18:14 crc kubenswrapper[4758]: I0122 17:18:14.695401 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="01fb1e209dcbeaf3580f2514e490323105bdb6768d6254ceaacb76d57033f58c" exitCode=0 Jan 22 17:18:14 crc kubenswrapper[4758]: I0122 17:18:14.695468 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"01fb1e209dcbeaf3580f2514e490323105bdb6768d6254ceaacb76d57033f58c"} Jan 22 17:18:14 crc kubenswrapper[4758]: I0122 17:18:14.696012 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178"} Jan 22 17:18:14 crc kubenswrapper[4758]: I0122 17:18:14.696046 4758 scope.go:117] "RemoveContainer" containerID="7b22a3b8055c9ca6f1b3b05a642218cc5ffe796314bc510e268584581f9db5e0" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.147560 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hdmsj"] Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.150858 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.162324 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdmsj"] Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.243859 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxsls\" (UniqueName: \"kubernetes.io/projected/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-kube-api-access-qxsls\") pod \"certified-operators-hdmsj\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.243927 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-utilities\") pod \"certified-operators-hdmsj\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.244041 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-catalog-content\") pod \"certified-operators-hdmsj\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.345962 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-utilities\") pod \"certified-operators-hdmsj\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.346050 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-catalog-content\") pod \"certified-operators-hdmsj\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.346266 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxsls\" (UniqueName: \"kubernetes.io/projected/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-kube-api-access-qxsls\") pod \"certified-operators-hdmsj\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.347279 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-utilities\") pod \"certified-operators-hdmsj\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.347283 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-catalog-content\") pod \"certified-operators-hdmsj\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.368584 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qxsls\" (UniqueName: \"kubernetes.io/projected/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-kube-api-access-qxsls\") pod \"certified-operators-hdmsj\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:14 crc kubenswrapper[4758]: I0122 17:19:14.495870 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:15 crc kubenswrapper[4758]: I0122 17:19:15.065029 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdmsj"] Jan 22 17:19:15 crc kubenswrapper[4758]: I0122 17:19:15.339108 4758 generic.go:334] "Generic (PLEG): container finished" podID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerID="5f21c55cdce018c32fbdec9817e61675dc3daf521acf33799de97de693565e31" exitCode=0 Jan 22 17:19:15 crc kubenswrapper[4758]: I0122 17:19:15.339159 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdmsj" event={"ID":"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52","Type":"ContainerDied","Data":"5f21c55cdce018c32fbdec9817e61675dc3daf521acf33799de97de693565e31"} Jan 22 17:19:15 crc kubenswrapper[4758]: I0122 17:19:15.339187 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdmsj" event={"ID":"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52","Type":"ContainerStarted","Data":"f0b4976efd0fa58c1d1e5db0679c922c9f972cbc07e34ab8a3d8395ab79f1b43"} Jan 22 17:19:19 crc kubenswrapper[4758]: I0122 17:19:19.460321 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdmsj" event={"ID":"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52","Type":"ContainerStarted","Data":"fe4f9f9cb91bc814c7639949421fe6e054d72de9a1547f0ee667bf582b6bc06e"} Jan 22 17:19:20 crc kubenswrapper[4758]: I0122 17:19:20.472385 4758 generic.go:334] "Generic (PLEG): container finished" podID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerID="fe4f9f9cb91bc814c7639949421fe6e054d72de9a1547f0ee667bf582b6bc06e" exitCode=0 Jan 22 17:19:20 crc kubenswrapper[4758]: I0122 17:19:20.472504 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdmsj" event={"ID":"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52","Type":"ContainerDied","Data":"fe4f9f9cb91bc814c7639949421fe6e054d72de9a1547f0ee667bf582b6bc06e"} Jan 22 17:19:21 crc kubenswrapper[4758]: I0122 17:19:21.489383 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdmsj" event={"ID":"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52","Type":"ContainerStarted","Data":"d6059ebdebaacf4505ad36caeb6fab6d221725f4b6be3264c6c54884754320a8"} Jan 22 17:19:21 crc kubenswrapper[4758]: I0122 17:19:21.522355 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hdmsj" podStartSLOduration=1.874972835 podStartE2EDuration="7.522296831s" podCreationTimestamp="2026-01-22 17:19:14 +0000 UTC" firstStartedPulling="2026-01-22 17:19:15.342294073 +0000 UTC m=+2976.825633378" lastFinishedPulling="2026-01-22 17:19:20.989618089 +0000 UTC m=+2982.472957374" observedRunningTime="2026-01-22 17:19:21.509287256 +0000 UTC m=+2982.992626551" watchObservedRunningTime="2026-01-22 17:19:21.522296831 +0000 UTC m=+2983.005636116" Jan 22 17:19:24 crc kubenswrapper[4758]: I0122 17:19:24.496712 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:24 crc kubenswrapper[4758]: I0122 17:19:24.497205 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:24 crc kubenswrapper[4758]: I0122 17:19:24.543905 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:34 crc kubenswrapper[4758]: I0122 17:19:34.563841 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:19:34 crc kubenswrapper[4758]: I0122 17:19:34.646919 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdmsj"] Jan 22 17:19:34 crc kubenswrapper[4758]: I0122 17:19:34.699061 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qr56r"] Jan 22 17:19:34 crc kubenswrapper[4758]: I0122 17:19:34.806711 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qr56r" podUID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerName="registry-server" containerID="cri-o://a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1" gracePeriod=2 Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.316571 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.429070 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dxn5\" (UniqueName: \"kubernetes.io/projected/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-kube-api-access-8dxn5\") pod \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.429125 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-utilities\") pod \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.429361 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-catalog-content\") pod \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\" (UID: \"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b\") " Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.429728 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-utilities" (OuterVolumeSpecName: "utilities") pod "6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" (UID: "6adfceee-6a7d-49d0-9d6d-360ae6e1f64b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.445017 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-kube-api-access-8dxn5" (OuterVolumeSpecName: "kube-api-access-8dxn5") pod "6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" (UID: "6adfceee-6a7d-49d0-9d6d-360ae6e1f64b"). InnerVolumeSpecName "kube-api-access-8dxn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.481824 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" (UID: "6adfceee-6a7d-49d0-9d6d-360ae6e1f64b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.531612 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.531645 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dxn5\" (UniqueName: \"kubernetes.io/projected/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-kube-api-access-8dxn5\") on node \"crc\" DevicePath \"\"" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.531660 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.655809 4758 generic.go:334] "Generic (PLEG): container finished" podID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerID="a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1" exitCode=0 Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.655910 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr56r" event={"ID":"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b","Type":"ContainerDied","Data":"a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1"} Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.656183 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr56r" event={"ID":"6adfceee-6a7d-49d0-9d6d-360ae6e1f64b","Type":"ContainerDied","Data":"1988e36702381058069edf526f20750a432ebb6278e6559c0cf25969df23d2d9"} Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.656219 4758 scope.go:117] "RemoveContainer" containerID="a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.656034 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qr56r" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.677521 4758 scope.go:117] "RemoveContainer" containerID="2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.697847 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qr56r"] Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.711946 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qr56r"] Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.718049 4758 scope.go:117] "RemoveContainer" containerID="27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.772881 4758 scope.go:117] "RemoveContainer" containerID="a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1" Jan 22 17:19:35 crc kubenswrapper[4758]: E0122 17:19:35.775295 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1\": container with ID starting with a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1 not found: ID does not exist" containerID="a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.775347 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1"} err="failed to get container status \"a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1\": rpc error: code = NotFound desc = could not find container \"a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1\": container with ID starting with a18350b0296b5f6f4bdca454d954bd5b866cb834f2e7498bb3fd0a2b7821a2e1 not found: ID does not exist" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.775375 4758 scope.go:117] "RemoveContainer" containerID="2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4" Jan 22 17:19:35 crc kubenswrapper[4758]: E0122 17:19:35.775829 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4\": container with ID starting with 2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4 not found: ID does not exist" containerID="2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.775868 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4"} err="failed to get container status \"2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4\": rpc error: code = NotFound desc = could not find container \"2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4\": container with ID starting with 2919da381e9a7a5706bfdde62cbf196359fc0d29ae7fb21b8d8451d7ed8323e4 not found: ID does not exist" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.775894 4758 scope.go:117] "RemoveContainer" containerID="27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3" Jan 22 17:19:35 crc kubenswrapper[4758]: E0122 17:19:35.776195 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3\": container with ID starting with 27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3 not found: ID does not exist" containerID="27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3" Jan 22 17:19:35 crc kubenswrapper[4758]: I0122 17:19:35.776219 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3"} err="failed to get container status \"27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3\": rpc error: code = NotFound desc = could not find container \"27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3\": container with ID starting with 27db9651f3a3c037a333f762d769405db4e0d18065fe218cfdf673a69acb52a3 not found: ID does not exist" Jan 22 17:19:36 crc kubenswrapper[4758]: I0122 17:19:36.821069 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" path="/var/lib/kubelet/pods/6adfceee-6a7d-49d0-9d6d-360ae6e1f64b/volumes" Jan 22 17:19:51 crc kubenswrapper[4758]: I0122 17:19:51.915735 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6kbz"] Jan 22 17:19:51 crc kubenswrapper[4758]: E0122 17:19:51.916714 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerName="extract-content" Jan 22 17:19:51 crc kubenswrapper[4758]: I0122 17:19:51.916728 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerName="extract-content" Jan 22 17:19:51 crc kubenswrapper[4758]: E0122 17:19:51.916783 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerName="registry-server" Jan 22 17:19:51 crc kubenswrapper[4758]: I0122 17:19:51.916789 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerName="registry-server" Jan 22 17:19:51 crc kubenswrapper[4758]: E0122 17:19:51.916804 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerName="extract-utilities" Jan 22 17:19:51 crc kubenswrapper[4758]: I0122 17:19:51.916811 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerName="extract-utilities" Jan 22 17:19:51 crc kubenswrapper[4758]: I0122 17:19:51.917028 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6adfceee-6a7d-49d0-9d6d-360ae6e1f64b" containerName="registry-server" Jan 22 17:19:51 crc kubenswrapper[4758]: I0122 17:19:51.918514 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:51 crc kubenswrapper[4758]: I0122 17:19:51.934408 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6kbz"] Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.091809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-utilities\") pod \"community-operators-q6kbz\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.091859 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-catalog-content\") pod \"community-operators-q6kbz\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.092827 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vvv\" (UniqueName: \"kubernetes.io/projected/acd70afd-8f49-499c-a1e5-32e13cca9ddf-kube-api-access-s9vvv\") pod \"community-operators-q6kbz\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.194596 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-catalog-content\") pod \"community-operators-q6kbz\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.194702 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9vvv\" (UniqueName: \"kubernetes.io/projected/acd70afd-8f49-499c-a1e5-32e13cca9ddf-kube-api-access-s9vvv\") pod \"community-operators-q6kbz\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.194999 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-utilities\") pod \"community-operators-q6kbz\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.195534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-utilities\") pod \"community-operators-q6kbz\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.195619 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-catalog-content\") pod \"community-operators-q6kbz\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.221572 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s9vvv\" (UniqueName: \"kubernetes.io/projected/acd70afd-8f49-499c-a1e5-32e13cca9ddf-kube-api-access-s9vvv\") pod \"community-operators-q6kbz\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.237778 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:19:52 crc kubenswrapper[4758]: I0122 17:19:52.833864 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6kbz"] Jan 22 17:19:53 crc kubenswrapper[4758]: I0122 17:19:53.818284 4758 generic.go:334] "Generic (PLEG): container finished" podID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerID="ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763" exitCode=0 Jan 22 17:19:53 crc kubenswrapper[4758]: I0122 17:19:53.818401 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6kbz" event={"ID":"acd70afd-8f49-499c-a1e5-32e13cca9ddf","Type":"ContainerDied","Data":"ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763"} Jan 22 17:19:53 crc kubenswrapper[4758]: I0122 17:19:53.818690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6kbz" event={"ID":"acd70afd-8f49-499c-a1e5-32e13cca9ddf","Type":"ContainerStarted","Data":"18e2d7615281ebc2894e3dbc6e0ee4cb38dee0e22167be36ab3866ceb110cf7c"} Jan 22 17:19:54 crc kubenswrapper[4758]: I0122 17:19:54.832104 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6kbz" event={"ID":"acd70afd-8f49-499c-a1e5-32e13cca9ddf","Type":"ContainerStarted","Data":"d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79"} Jan 22 17:19:56 crc kubenswrapper[4758]: I0122 17:19:56.852563 4758 generic.go:334] "Generic (PLEG): container finished" podID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerID="d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79" exitCode=0 Jan 22 17:19:56 crc kubenswrapper[4758]: I0122 17:19:56.852642 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6kbz" event={"ID":"acd70afd-8f49-499c-a1e5-32e13cca9ddf","Type":"ContainerDied","Data":"d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79"} Jan 22 17:19:57 crc kubenswrapper[4758]: I0122 17:19:57.863479 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6kbz" event={"ID":"acd70afd-8f49-499c-a1e5-32e13cca9ddf","Type":"ContainerStarted","Data":"9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466"} Jan 22 17:19:57 crc kubenswrapper[4758]: I0122 17:19:57.883351 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q6kbz" podStartSLOduration=3.442807043 podStartE2EDuration="6.883290525s" podCreationTimestamp="2026-01-22 17:19:51 +0000 UTC" firstStartedPulling="2026-01-22 17:19:53.819909643 +0000 UTC m=+3015.303248928" lastFinishedPulling="2026-01-22 17:19:57.260393125 +0000 UTC m=+3018.743732410" observedRunningTime="2026-01-22 17:19:57.879987034 +0000 UTC m=+3019.363326319" watchObservedRunningTime="2026-01-22 17:19:57.883290525 +0000 UTC m=+3019.366629810" Jan 22 17:20:02 crc kubenswrapper[4758]: I0122 17:20:02.238358 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:20:02 crc kubenswrapper[4758]: I0122 17:20:02.238999 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:20:02 crc kubenswrapper[4758]: I0122 17:20:02.285633 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:20:02 crc kubenswrapper[4758]: I0122 17:20:02.949415 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:20:02 crc kubenswrapper[4758]: I0122 17:20:02.998680 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6kbz"] Jan 22 17:20:04 crc kubenswrapper[4758]: I0122 17:20:04.933290 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6kbz" podUID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerName="registry-server" containerID="cri-o://9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466" gracePeriod=2 Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.435243 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.576535 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-catalog-content\") pod \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.576712 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-utilities\") pod \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.576851 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9vvv\" (UniqueName: \"kubernetes.io/projected/acd70afd-8f49-499c-a1e5-32e13cca9ddf-kube-api-access-s9vvv\") pod \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\" (UID: \"acd70afd-8f49-499c-a1e5-32e13cca9ddf\") " Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.577636 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-utilities" (OuterVolumeSpecName: "utilities") pod "acd70afd-8f49-499c-a1e5-32e13cca9ddf" (UID: "acd70afd-8f49-499c-a1e5-32e13cca9ddf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.587164 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd70afd-8f49-499c-a1e5-32e13cca9ddf-kube-api-access-s9vvv" (OuterVolumeSpecName: "kube-api-access-s9vvv") pod "acd70afd-8f49-499c-a1e5-32e13cca9ddf" (UID: "acd70afd-8f49-499c-a1e5-32e13cca9ddf"). InnerVolumeSpecName "kube-api-access-s9vvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.645929 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acd70afd-8f49-499c-a1e5-32e13cca9ddf" (UID: "acd70afd-8f49-499c-a1e5-32e13cca9ddf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.679932 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.679998 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9vvv\" (UniqueName: \"kubernetes.io/projected/acd70afd-8f49-499c-a1e5-32e13cca9ddf-kube-api-access-s9vvv\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.680013 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd70afd-8f49-499c-a1e5-32e13cca9ddf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.948853 4758 generic.go:334] "Generic (PLEG): container finished" podID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerID="9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466" exitCode=0 Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.948977 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6kbz" event={"ID":"acd70afd-8f49-499c-a1e5-32e13cca9ddf","Type":"ContainerDied","Data":"9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466"} Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.949053 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6kbz" Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.949087 4758 scope.go:117] "RemoveContainer" containerID="9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466" Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.949071 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6kbz" event={"ID":"acd70afd-8f49-499c-a1e5-32e13cca9ddf","Type":"ContainerDied","Data":"18e2d7615281ebc2894e3dbc6e0ee4cb38dee0e22167be36ab3866ceb110cf7c"} Jan 22 17:20:05 crc kubenswrapper[4758]: I0122 17:20:05.979937 4758 scope.go:117] "RemoveContainer" containerID="d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79" Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.014812 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6kbz"] Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.037245 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6kbz"] Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.060999 4758 scope.go:117] "RemoveContainer" containerID="ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763" Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.141290 4758 scope.go:117] "RemoveContainer" containerID="9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466" Jan 22 17:20:06 crc kubenswrapper[4758]: E0122 17:20:06.144883 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466\": container with ID starting with 9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466 not found: ID does not exist" containerID="9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466" Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.144934 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466"} err="failed to get container status \"9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466\": rpc error: code = NotFound desc = could not find container \"9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466\": container with ID starting with 9a2ed071a81a48608844f19baa2d60dc08ea7cef46b31332304cf96b04117466 not found: ID does not exist" Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.144966 4758 scope.go:117] "RemoveContainer" containerID="d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79" Jan 22 17:20:06 crc kubenswrapper[4758]: E0122 17:20:06.145783 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79\": container with ID starting with d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79 not found: ID does not exist" containerID="d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79" Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.145807 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79"} err="failed to get container status \"d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79\": rpc error: code = NotFound desc = could not find 
container \"d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79\": container with ID starting with d9bf5a53001a8a9a8b5f6d0f5beab3d9d2a1a87754ea601b0fb8b3e4f64f5e79 not found: ID does not exist" Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.145821 4758 scope.go:117] "RemoveContainer" containerID="ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763" Jan 22 17:20:06 crc kubenswrapper[4758]: E0122 17:20:06.146176 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763\": container with ID starting with ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763 not found: ID does not exist" containerID="ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763" Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.146210 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763"} err="failed to get container status \"ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763\": rpc error: code = NotFound desc = could not find container \"ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763\": container with ID starting with ba5d616e53561115624bf6d418296f0d10a6be43cbdbfdb03da87a042b979763 not found: ID does not exist" Jan 22 17:20:06 crc kubenswrapper[4758]: I0122 17:20:06.821138 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" path="/var/lib/kubelet/pods/acd70afd-8f49-499c-a1e5-32e13cca9ddf/volumes" Jan 22 17:20:09 crc kubenswrapper[4758]: I0122 17:20:09.995297 4758 generic.go:334] "Generic (PLEG): container finished" podID="7cbdeacc-f53e-43de-9068-513ac27f1487" containerID="78848e0d0494e06df5092db6877c882a0d760c262b115fbd5bc7c08ac0ea7452" exitCode=0 Jan 22 17:20:09 crc kubenswrapper[4758]: I0122 17:20:09.995490 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" event={"ID":"7cbdeacc-f53e-43de-9068-513ac27f1487","Type":"ContainerDied","Data":"78848e0d0494e06df5092db6877c882a0d760c262b115fbd5bc7c08ac0ea7452"} Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.461837 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.601331 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-0\") pod \"7cbdeacc-f53e-43de-9068-513ac27f1487\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.601516 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-1\") pod \"7cbdeacc-f53e-43de-9068-513ac27f1487\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.601546 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-combined-ca-bundle\") pod \"7cbdeacc-f53e-43de-9068-513ac27f1487\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.601608 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-ssh-key-openstack-edpm-ipam\") pod \"7cbdeacc-f53e-43de-9068-513ac27f1487\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.601651 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbksx\" (UniqueName: \"kubernetes.io/projected/7cbdeacc-f53e-43de-9068-513ac27f1487-kube-api-access-mbksx\") pod \"7cbdeacc-f53e-43de-9068-513ac27f1487\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.601725 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-extra-config-0\") pod \"7cbdeacc-f53e-43de-9068-513ac27f1487\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.601759 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-1\") pod \"7cbdeacc-f53e-43de-9068-513ac27f1487\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.601808 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-0\") pod \"7cbdeacc-f53e-43de-9068-513ac27f1487\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.601838 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-inventory\") pod \"7cbdeacc-f53e-43de-9068-513ac27f1487\" (UID: \"7cbdeacc-f53e-43de-9068-513ac27f1487\") " Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.608399 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7cbdeacc-f53e-43de-9068-513ac27f1487-kube-api-access-mbksx" (OuterVolumeSpecName: "kube-api-access-mbksx") pod "7cbdeacc-f53e-43de-9068-513ac27f1487" (UID: "7cbdeacc-f53e-43de-9068-513ac27f1487"). InnerVolumeSpecName "kube-api-access-mbksx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.608449 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7cbdeacc-f53e-43de-9068-513ac27f1487" (UID: "7cbdeacc-f53e-43de-9068-513ac27f1487"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.634942 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "7cbdeacc-f53e-43de-9068-513ac27f1487" (UID: "7cbdeacc-f53e-43de-9068-513ac27f1487"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.636592 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7cbdeacc-f53e-43de-9068-513ac27f1487" (UID: "7cbdeacc-f53e-43de-9068-513ac27f1487"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.640454 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-inventory" (OuterVolumeSpecName: "inventory") pod "7cbdeacc-f53e-43de-9068-513ac27f1487" (UID: "7cbdeacc-f53e-43de-9068-513ac27f1487"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.642560 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "7cbdeacc-f53e-43de-9068-513ac27f1487" (UID: "7cbdeacc-f53e-43de-9068-513ac27f1487"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.652178 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "7cbdeacc-f53e-43de-9068-513ac27f1487" (UID: "7cbdeacc-f53e-43de-9068-513ac27f1487"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.660022 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "7cbdeacc-f53e-43de-9068-513ac27f1487" (UID: "7cbdeacc-f53e-43de-9068-513ac27f1487"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.664448 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "7cbdeacc-f53e-43de-9068-513ac27f1487" (UID: "7cbdeacc-f53e-43de-9068-513ac27f1487"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.704438 4758 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.704490 4758 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.704501 4758 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.704510 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.704534 4758 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.704543 4758 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.704553 4758 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.704561 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cbdeacc-f53e-43de-9068-513ac27f1487-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:11 crc kubenswrapper[4758]: I0122 17:20:11.704570 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbksx\" (UniqueName: \"kubernetes.io/projected/7cbdeacc-f53e-43de-9068-513ac27f1487-kube-api-access-mbksx\") on node \"crc\" DevicePath \"\"" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.018860 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" event={"ID":"7cbdeacc-f53e-43de-9068-513ac27f1487","Type":"ContainerDied","Data":"f8a09b65d1c7aef1dcde2fb770b9b174c5c0dba0e5a079847a23ba9b36cae78c"} Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.019157 4758 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f8a09b65d1c7aef1dcde2fb770b9b174c5c0dba0e5a079847a23ba9b36cae78c" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.018947 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7j728" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.133265 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9"] Jan 22 17:20:12 crc kubenswrapper[4758]: E0122 17:20:12.133721 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerName="extract-content" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.133761 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerName="extract-content" Jan 22 17:20:12 crc kubenswrapper[4758]: E0122 17:20:12.133791 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerName="extract-utilities" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.133800 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerName="extract-utilities" Jan 22 17:20:12 crc kubenswrapper[4758]: E0122 17:20:12.133818 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerName="registry-server" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.133827 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerName="registry-server" Jan 22 17:20:12 crc kubenswrapper[4758]: E0122 17:20:12.133860 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cbdeacc-f53e-43de-9068-513ac27f1487" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.133869 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cbdeacc-f53e-43de-9068-513ac27f1487" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.134122 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cbdeacc-f53e-43de-9068-513ac27f1487" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.134149 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd70afd-8f49-499c-a1e5-32e13cca9ddf" containerName="registry-server" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.135008 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.137909 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.138278 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.138670 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.139977 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.146217 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5gz9n" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.148393 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9"] Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.213849 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.213929 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.214080 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkpdt\" (UniqueName: \"kubernetes.io/projected/e8778204-17cb-497b-a3d2-4d5f7709924d-kube-api-access-vkpdt\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.214238 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.214502 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc 
kubenswrapper[4758]: I0122 17:20:12.214699 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.215097 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.317544 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.317639 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.317694 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkpdt\" (UniqueName: \"kubernetes.io/projected/e8778204-17cb-497b-a3d2-4d5f7709924d-kube-api-access-vkpdt\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.317774 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.317849 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.317907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.317979 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.322074 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.322785 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.323012 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.326454 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.327133 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.331015 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.334757 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkpdt\" (UniqueName: 
\"kubernetes.io/projected/e8778204-17cb-497b-a3d2-4d5f7709924d-kube-api-access-vkpdt\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:12 crc kubenswrapper[4758]: I0122 17:20:12.458044 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:20:13 crc kubenswrapper[4758]: I0122 17:20:13.023826 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9"] Jan 22 17:20:14 crc kubenswrapper[4758]: I0122 17:20:14.043634 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" event={"ID":"e8778204-17cb-497b-a3d2-4d5f7709924d","Type":"ContainerStarted","Data":"21c72f33a61f7a21d8511661a9f08733340ecc595543a41fa11187b9fa1e30dd"} Jan 22 17:20:14 crc kubenswrapper[4758]: I0122 17:20:14.044035 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" event={"ID":"e8778204-17cb-497b-a3d2-4d5f7709924d","Type":"ContainerStarted","Data":"69f9acd31feba3b8d75472548c7d7613ac9694af78c664b541a5256e43587ed8"} Jan 22 17:20:14 crc kubenswrapper[4758]: I0122 17:20:14.069939 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" podStartSLOduration=1.5232870379999999 podStartE2EDuration="2.06990578s" podCreationTimestamp="2026-01-22 17:20:12 +0000 UTC" firstStartedPulling="2026-01-22 17:20:13.027608814 +0000 UTC m=+3034.510948119" lastFinishedPulling="2026-01-22 17:20:13.574227576 +0000 UTC m=+3035.057566861" observedRunningTime="2026-01-22 17:20:14.063225969 +0000 UTC m=+3035.546565304" watchObservedRunningTime="2026-01-22 17:20:14.06990578 +0000 UTC m=+3035.553245085" Jan 22 17:20:43 crc kubenswrapper[4758]: I0122 17:20:43.837058 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:20:43 crc kubenswrapper[4758]: I0122 17:20:43.837559 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:21:13 crc kubenswrapper[4758]: I0122 17:21:13.837061 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:21:13 crc kubenswrapper[4758]: I0122 17:21:13.837665 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:21:43 crc kubenswrapper[4758]: I0122 17:21:43.837675 4758 
patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:21:43 crc kubenswrapper[4758]: I0122 17:21:43.838442 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:21:43 crc kubenswrapper[4758]: I0122 17:21:43.838547 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:21:43 crc kubenswrapper[4758]: I0122 17:21:43.839791 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:21:43 crc kubenswrapper[4758]: I0122 17:21:43.839936 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" gracePeriod=600 Jan 22 17:21:43 crc kubenswrapper[4758]: E0122 17:21:43.976860 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:21:44 crc kubenswrapper[4758]: I0122 17:21:44.613877 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" exitCode=0 Jan 22 17:21:44 crc kubenswrapper[4758]: I0122 17:21:44.614215 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178"} Jan 22 17:21:44 crc kubenswrapper[4758]: I0122 17:21:44.614359 4758 scope.go:117] "RemoveContainer" containerID="01fb1e209dcbeaf3580f2514e490323105bdb6768d6254ceaacb76d57033f58c" Jan 22 17:21:44 crc kubenswrapper[4758]: I0122 17:21:44.615996 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:21:44 crc kubenswrapper[4758]: E0122 17:21:44.616566 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:21:57 crc kubenswrapper[4758]: I0122 17:21:57.808823 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:21:57 crc kubenswrapper[4758]: E0122 17:21:57.809762 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:22:09 crc kubenswrapper[4758]: I0122 17:22:09.808408 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:22:09 crc kubenswrapper[4758]: E0122 17:22:09.809263 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:22:21 crc kubenswrapper[4758]: I0122 17:22:21.811483 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:22:21 crc kubenswrapper[4758]: E0122 17:22:21.812241 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.572983 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8jgwb"] Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.576694 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.591890 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jgwb"] Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.749546 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jslg6\" (UniqueName: \"kubernetes.io/projected/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-kube-api-access-jslg6\") pod \"redhat-marketplace-8jgwb\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.749617 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-catalog-content\") pod \"redhat-marketplace-8jgwb\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.749796 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-utilities\") pod \"redhat-marketplace-8jgwb\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.852188 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-utilities\") pod \"redhat-marketplace-8jgwb\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.852350 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jslg6\" (UniqueName: \"kubernetes.io/projected/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-kube-api-access-jslg6\") pod \"redhat-marketplace-8jgwb\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.852387 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-catalog-content\") pod \"redhat-marketplace-8jgwb\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.852676 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-utilities\") pod \"redhat-marketplace-8jgwb\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.852768 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-catalog-content\") pod \"redhat-marketplace-8jgwb\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.882683 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jslg6\" (UniqueName: \"kubernetes.io/projected/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-kube-api-access-jslg6\") pod \"redhat-marketplace-8jgwb\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:26 crc kubenswrapper[4758]: I0122 17:22:26.921153 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:27 crc kubenswrapper[4758]: I0122 17:22:27.490889 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jgwb"] Jan 22 17:22:28 crc kubenswrapper[4758]: I0122 17:22:28.088609 4758 generic.go:334] "Generic (PLEG): container finished" podID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerID="cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e" exitCode=0 Jan 22 17:22:28 crc kubenswrapper[4758]: I0122 17:22:28.088676 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jgwb" event={"ID":"c74fee2b-b06f-4d90-87cb-36a020ecdd6e","Type":"ContainerDied","Data":"cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e"} Jan 22 17:22:28 crc kubenswrapper[4758]: I0122 17:22:28.088938 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jgwb" event={"ID":"c74fee2b-b06f-4d90-87cb-36a020ecdd6e","Type":"ContainerStarted","Data":"cdfbd39c3b9b2908cf08cf6ea08f360671aca615e0c0a371194198fdeb72aeb3"} Jan 22 17:22:28 crc kubenswrapper[4758]: I0122 17:22:28.091108 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:22:29 crc kubenswrapper[4758]: I0122 17:22:29.100494 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jgwb" event={"ID":"c74fee2b-b06f-4d90-87cb-36a020ecdd6e","Type":"ContainerStarted","Data":"33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806"} Jan 22 17:22:30 crc kubenswrapper[4758]: I0122 17:22:30.114626 4758 generic.go:334] "Generic (PLEG): container finished" podID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerID="33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806" exitCode=0 Jan 22 17:22:30 crc kubenswrapper[4758]: I0122 17:22:30.114754 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jgwb" event={"ID":"c74fee2b-b06f-4d90-87cb-36a020ecdd6e","Type":"ContainerDied","Data":"33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806"} Jan 22 17:22:32 crc kubenswrapper[4758]: I0122 17:22:32.141169 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jgwb" event={"ID":"c74fee2b-b06f-4d90-87cb-36a020ecdd6e","Type":"ContainerStarted","Data":"50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf"} Jan 22 17:22:32 crc kubenswrapper[4758]: I0122 17:22:32.166931 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8jgwb" podStartSLOduration=3.1843324490000002 podStartE2EDuration="6.166893105s" podCreationTimestamp="2026-01-22 17:22:26 +0000 UTC" firstStartedPulling="2026-01-22 17:22:28.090766476 +0000 UTC m=+3169.574105771" lastFinishedPulling="2026-01-22 17:22:31.073327142 +0000 UTC m=+3172.556666427" observedRunningTime="2026-01-22 17:22:32.160273915 +0000 UTC m=+3173.643613210" watchObservedRunningTime="2026-01-22 17:22:32.166893105 +0000 UTC 
m=+3173.650232390" Jan 22 17:22:34 crc kubenswrapper[4758]: I0122 17:22:34.808432 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:22:34 crc kubenswrapper[4758]: E0122 17:22:34.809221 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:22:36 crc kubenswrapper[4758]: I0122 17:22:36.229890 4758 generic.go:334] "Generic (PLEG): container finished" podID="e8778204-17cb-497b-a3d2-4d5f7709924d" containerID="21c72f33a61f7a21d8511661a9f08733340ecc595543a41fa11187b9fa1e30dd" exitCode=0 Jan 22 17:22:36 crc kubenswrapper[4758]: I0122 17:22:36.230079 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" event={"ID":"e8778204-17cb-497b-a3d2-4d5f7709924d","Type":"ContainerDied","Data":"21c72f33a61f7a21d8511661a9f08733340ecc595543a41fa11187b9fa1e30dd"} Jan 22 17:22:36 crc kubenswrapper[4758]: I0122 17:22:36.919852 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:36 crc kubenswrapper[4758]: I0122 17:22:36.921339 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:36 crc kubenswrapper[4758]: I0122 17:22:36.993319 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.315897 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.380554 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jgwb"] Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.724038 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.825898 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-inventory\") pod \"e8778204-17cb-497b-a3d2-4d5f7709924d\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.825978 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-1\") pod \"e8778204-17cb-497b-a3d2-4d5f7709924d\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.826064 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ssh-key-openstack-edpm-ipam\") pod \"e8778204-17cb-497b-a3d2-4d5f7709924d\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.826160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-0\") pod \"e8778204-17cb-497b-a3d2-4d5f7709924d\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.826195 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkpdt\" (UniqueName: \"kubernetes.io/projected/e8778204-17cb-497b-a3d2-4d5f7709924d-kube-api-access-vkpdt\") pod \"e8778204-17cb-497b-a3d2-4d5f7709924d\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.826317 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-telemetry-combined-ca-bundle\") pod \"e8778204-17cb-497b-a3d2-4d5f7709924d\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.826356 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-2\") pod \"e8778204-17cb-497b-a3d2-4d5f7709924d\" (UID: \"e8778204-17cb-497b-a3d2-4d5f7709924d\") " Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.894073 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8778204-17cb-497b-a3d2-4d5f7709924d-kube-api-access-vkpdt" (OuterVolumeSpecName: "kube-api-access-vkpdt") pod "e8778204-17cb-497b-a3d2-4d5f7709924d" (UID: "e8778204-17cb-497b-a3d2-4d5f7709924d"). InnerVolumeSpecName "kube-api-access-vkpdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.900268 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e8778204-17cb-497b-a3d2-4d5f7709924d" (UID: "e8778204-17cb-497b-a3d2-4d5f7709924d"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.905403 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-inventory" (OuterVolumeSpecName: "inventory") pod "e8778204-17cb-497b-a3d2-4d5f7709924d" (UID: "e8778204-17cb-497b-a3d2-4d5f7709924d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.919672 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e8778204-17cb-497b-a3d2-4d5f7709924d" (UID: "e8778204-17cb-497b-a3d2-4d5f7709924d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.923906 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e8778204-17cb-497b-a3d2-4d5f7709924d" (UID: "e8778204-17cb-497b-a3d2-4d5f7709924d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.930180 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.930428 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.930551 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkpdt\" (UniqueName: \"kubernetes.io/projected/e8778204-17cb-497b-a3d2-4d5f7709924d-kube-api-access-vkpdt\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.930641 4758 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.930727 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.934326 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e8778204-17cb-497b-a3d2-4d5f7709924d" (UID: "e8778204-17cb-497b-a3d2-4d5f7709924d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:22:37 crc kubenswrapper[4758]: I0122 17:22:37.934723 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8778204-17cb-497b-a3d2-4d5f7709924d" (UID: "e8778204-17cb-497b-a3d2-4d5f7709924d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:22:38 crc kubenswrapper[4758]: I0122 17:22:38.032072 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:38 crc kubenswrapper[4758]: I0122 17:22:38.032099 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e8778204-17cb-497b-a3d2-4d5f7709924d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:38 crc kubenswrapper[4758]: I0122 17:22:38.315411 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" Jan 22 17:22:38 crc kubenswrapper[4758]: I0122 17:22:38.320405 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9" event={"ID":"e8778204-17cb-497b-a3d2-4d5f7709924d","Type":"ContainerDied","Data":"69f9acd31feba3b8d75472548c7d7613ac9694af78c664b541a5256e43587ed8"} Jan 22 17:22:38 crc kubenswrapper[4758]: I0122 17:22:38.320478 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f9acd31feba3b8d75472548c7d7613ac9694af78c664b541a5256e43587ed8" Jan 22 17:22:39 crc kubenswrapper[4758]: I0122 17:22:39.325529 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8jgwb" podUID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerName="registry-server" containerID="cri-o://50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf" gracePeriod=2 Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.057141 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.188399 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-catalog-content\") pod \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.188709 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jslg6\" (UniqueName: \"kubernetes.io/projected/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-kube-api-access-jslg6\") pod \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.188941 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-utilities\") pod \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\" (UID: \"c74fee2b-b06f-4d90-87cb-36a020ecdd6e\") " Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.189973 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-utilities" (OuterVolumeSpecName: "utilities") pod "c74fee2b-b06f-4d90-87cb-36a020ecdd6e" (UID: "c74fee2b-b06f-4d90-87cb-36a020ecdd6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.195685 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-kube-api-access-jslg6" (OuterVolumeSpecName: "kube-api-access-jslg6") pod "c74fee2b-b06f-4d90-87cb-36a020ecdd6e" (UID: "c74fee2b-b06f-4d90-87cb-36a020ecdd6e"). InnerVolumeSpecName "kube-api-access-jslg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.220194 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c74fee2b-b06f-4d90-87cb-36a020ecdd6e" (UID: "c74fee2b-b06f-4d90-87cb-36a020ecdd6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.290953 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.290986 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.291002 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jslg6\" (UniqueName: \"kubernetes.io/projected/c74fee2b-b06f-4d90-87cb-36a020ecdd6e-kube-api-access-jslg6\") on node \"crc\" DevicePath \"\"" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.337387 4758 generic.go:334] "Generic (PLEG): container finished" podID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerID="50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf" exitCode=0 Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.337450 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jgwb" event={"ID":"c74fee2b-b06f-4d90-87cb-36a020ecdd6e","Type":"ContainerDied","Data":"50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf"} Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.337497 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jgwb" event={"ID":"c74fee2b-b06f-4d90-87cb-36a020ecdd6e","Type":"ContainerDied","Data":"cdfbd39c3b9b2908cf08cf6ea08f360671aca615e0c0a371194198fdeb72aeb3"} Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.337535 4758 scope.go:117] "RemoveContainer" containerID="50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.337800 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jgwb" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.363857 4758 scope.go:117] "RemoveContainer" containerID="33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.391354 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jgwb"] Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.394691 4758 scope.go:117] "RemoveContainer" containerID="cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.403302 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jgwb"] Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.451708 4758 scope.go:117] "RemoveContainer" containerID="50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf" Jan 22 17:22:40 crc kubenswrapper[4758]: E0122 17:22:40.452284 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf\": container with ID starting with 50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf not found: ID does not exist" containerID="50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.452375 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf"} err="failed to get container status \"50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf\": rpc error: code = NotFound desc = could not find container \"50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf\": container with ID starting with 50a8cd51ba2660f28ea95f3ca478ba0551af7922531db57da273b407159b5adf not found: ID does not exist" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.452440 4758 scope.go:117] "RemoveContainer" containerID="33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806" Jan 22 17:22:40 crc kubenswrapper[4758]: E0122 17:22:40.452823 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806\": container with ID starting with 33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806 not found: ID does not exist" containerID="33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.452866 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806"} err="failed to get container status \"33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806\": rpc error: code = NotFound desc = could not find container \"33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806\": container with ID starting with 33fe51c1859db13b2b7f5f08d59e256b8514ba17f6d6ba35c9d903f216fee806 not found: ID does not exist" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.452897 4758 scope.go:117] "RemoveContainer" containerID="cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e" Jan 22 17:22:40 crc kubenswrapper[4758]: E0122 17:22:40.453263 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e\": container with ID starting with cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e not found: ID does not exist" containerID="cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.453326 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e"} err="failed to get container status \"cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e\": rpc error: code = NotFound desc = could not find container \"cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e\": container with ID starting with cdb90c994199da42b700aa1cf69e8dc4aaf92612212ca60d852868f8170a744e not found: ID does not exist" Jan 22 17:22:40 crc kubenswrapper[4758]: I0122 17:22:40.821697 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" path="/var/lib/kubelet/pods/c74fee2b-b06f-4d90-87cb-36a020ecdd6e/volumes" Jan 22 17:22:47 crc kubenswrapper[4758]: I0122 17:22:47.810632 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:22:47 crc kubenswrapper[4758]: E0122 17:22:47.811417 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:23:00 crc kubenswrapper[4758]: I0122 17:23:00.808056 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:23:00 crc kubenswrapper[4758]: E0122 17:23:00.808892 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:23:13 crc kubenswrapper[4758]: I0122 17:23:13.808170 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:23:13 crc kubenswrapper[4758]: E0122 17:23:13.810944 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.442352 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 22 17:23:16 crc kubenswrapper[4758]: E0122 17:23:16.443445 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" 
containerName="extract-utilities" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.443470 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerName="extract-utilities" Jan 22 17:23:16 crc kubenswrapper[4758]: E0122 17:23:16.443497 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8778204-17cb-497b-a3d2-4d5f7709924d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.443507 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8778204-17cb-497b-a3d2-4d5f7709924d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 22 17:23:16 crc kubenswrapper[4758]: E0122 17:23:16.443532 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerName="extract-content" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.443540 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerName="extract-content" Jan 22 17:23:16 crc kubenswrapper[4758]: E0122 17:23:16.443554 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerName="registry-server" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.443561 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerName="registry-server" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.443855 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74fee2b-b06f-4d90-87cb-36a020ecdd6e" containerName="registry-server" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.443881 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8778204-17cb-497b-a3d2-4d5f7709924d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.450927 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.454218 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.477149 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.554986 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555066 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-config-data\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555119 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555157 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555202 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-lib-modules\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555241 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-scripts\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555267 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-dev\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555293 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555336 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-run\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555362 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgxq6\" (UniqueName: \"kubernetes.io/projected/9246ea76-1e99-4458-86ef-6ca8d66b6eba-kube-api-access-rgxq6\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555403 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555452 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555513 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555541 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.555561 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-sys\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.638538 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.642090 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.647288 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.656841 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.656895 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-lib-modules\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.656931 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-scripts\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.656947 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-dev\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.656963 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.656994 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-run\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657009 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgxq6\" (UniqueName: \"kubernetes.io/projected/9246ea76-1e99-4458-86ef-6ca8d66b6eba-kube-api-access-rgxq6\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657035 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657071 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657109 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657125 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657141 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-sys\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657176 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657190 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-config-data\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657220 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657281 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-lib-modules\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657420 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657517 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-run\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657575 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-dev\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657615 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657672 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-sys\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657704 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657759 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657859 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.657952 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9246ea76-1e99-4458-86ef-6ca8d66b6eba-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.673382 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-scripts\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.673539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.676262 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-config-data\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.695369 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgxq6\" (UniqueName: \"kubernetes.io/projected/9246ea76-1e99-4458-86ef-6ca8d66b6eba-kube-api-access-rgxq6\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.695620 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9246ea76-1e99-4458-86ef-6ca8d66b6eba-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9246ea76-1e99-4458-86ef-6ca8d66b6eba\") " pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.697883 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.741734 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.743765 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.760354 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762153 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-run\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762191 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762217 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkq9f\" (UniqueName: \"kubernetes.io/projected/d027f54f-c313-4750-b9ba-18241f322033-kube-api-access-vkq9f\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762234 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762249 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-dev\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762270 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-sys\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc 
kubenswrapper[4758]: I0122 17:23:16.762305 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762378 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762400 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762426 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762458 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762482 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762518 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.762567 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.772593 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.790370 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867001 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-run\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867109 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867161 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867184 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867187 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-run\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867204 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkq9f\" (UniqueName: \"kubernetes.io/projected/d027f54f-c313-4750-b9ba-18241f322033-kube-api-access-vkq9f\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867242 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867284 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-dev\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 
17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-sys\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867336 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867359 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867378 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867468 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867508 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867565 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867585 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867600 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867634 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867635 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867679 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867696 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66nf6\" (UniqueName: \"kubernetes.io/projected/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-kube-api-access-66nf6\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867717 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867758 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867782 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867837 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867856 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-run\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867882 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867915 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867959 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867998 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.868077 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-dev\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.867285 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.868106 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-sys\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.868153 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.868549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.868611 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.869601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d027f54f-c313-4750-b9ba-18241f322033-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.871392 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.878623 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.879535 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.880437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d027f54f-c313-4750-b9ba-18241f322033-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.884510 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkq9f\" (UniqueName: \"kubernetes.io/projected/d027f54f-c313-4750-b9ba-18241f322033-kube-api-access-vkq9f\") pod \"cinder-volume-nfs-0\" (UID: \"d027f54f-c313-4750-b9ba-18241f322033\") " pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.967652 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.971678 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.972502 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.972718 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973158 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973201 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973241 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973378 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973398 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973425 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973451 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66nf6\" (UniqueName: 
\"kubernetes.io/projected/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-kube-api-access-66nf6\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973468 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973799 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973871 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973905 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.973983 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.974084 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.974132 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.975149 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 
17:23:16.977516 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.977875 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.977933 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.978084 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.978129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.978155 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.978624 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.979082 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.980490 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.981266 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 
17:23:16 crc kubenswrapper[4758]: I0122 17:23:16.994821 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66nf6\" (UniqueName: \"kubernetes.io/projected/bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5-kube-api-access-66nf6\") pod \"cinder-volume-nfs-2-0\" (UID: \"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:17 crc kubenswrapper[4758]: I0122 17:23:17.103579 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:17 crc kubenswrapper[4758]: I0122 17:23:17.446903 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 22 17:23:17 crc kubenswrapper[4758]: I0122 17:23:17.650970 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 22 17:23:17 crc kubenswrapper[4758]: I0122 17:23:17.722397 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9246ea76-1e99-4458-86ef-6ca8d66b6eba","Type":"ContainerStarted","Data":"7453c719c7c333756584250e9b2d6de82558a28610d68edffcc69ee426b1435e"} Jan 22 17:23:17 crc kubenswrapper[4758]: I0122 17:23:17.724387 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"d027f54f-c313-4750-b9ba-18241f322033","Type":"ContainerStarted","Data":"99b96d30b227ae08b81e6f3f3c77fcb502e8cb0144f6d6dfa0279e708a300b99"} Jan 22 17:23:17 crc kubenswrapper[4758]: I0122 17:23:17.784275 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 22 17:23:17 crc kubenswrapper[4758]: W0122 17:23:17.845985 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc682df5_1ce8_4c38_aea1_2c1d3e2f78b5.slice/crio-20a1a519af44773c0bfd3ab4dc886e23920882464639a511f87791e7eb43cc1b WatchSource:0}: Error finding container 20a1a519af44773c0bfd3ab4dc886e23920882464639a511f87791e7eb43cc1b: Status 404 returned error can't find the container with id 20a1a519af44773c0bfd3ab4dc886e23920882464639a511f87791e7eb43cc1b Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.734524 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"d027f54f-c313-4750-b9ba-18241f322033","Type":"ContainerStarted","Data":"19b6ec4050ea9713c5c9a751599c594aa0943edd4933a64f6b38fdedea0e733e"} Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.735225 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"d027f54f-c313-4750-b9ba-18241f322033","Type":"ContainerStarted","Data":"45bc8bcb9e2a5e9d3557f251bef154b88c74d1ea55953365ba8097f1c8560f4b"} Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.737792 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9246ea76-1e99-4458-86ef-6ca8d66b6eba","Type":"ContainerStarted","Data":"850df7411115b592cf9f3f25614d91c22d4971020b2a744725d5ee904847a000"} Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.737835 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9246ea76-1e99-4458-86ef-6ca8d66b6eba","Type":"ContainerStarted","Data":"3996005e757dc4557c61517265bd26b78eee3b9679ade87c56a73e3f31b81e42"} Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.740095 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" 
event={"ID":"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5","Type":"ContainerStarted","Data":"0d6db8c30bab480fd5cb243290cfe717b2cee8e9b357caa3f078f233f11fa72e"} Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.740120 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5","Type":"ContainerStarted","Data":"5face0d97b1dd0b53bc87c67909d394eeff06fa3b7554b55fdd0430fcd30047f"} Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.740130 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5","Type":"ContainerStarted","Data":"20a1a519af44773c0bfd3ab4dc886e23920882464639a511f87791e7eb43cc1b"} Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.772959 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.527239528 podStartE2EDuration="2.772922867s" podCreationTimestamp="2026-01-22 17:23:16 +0000 UTC" firstStartedPulling="2026-01-22 17:23:17.654929384 +0000 UTC m=+3219.138268669" lastFinishedPulling="2026-01-22 17:23:17.900612723 +0000 UTC m=+3219.383952008" observedRunningTime="2026-01-22 17:23:18.766835002 +0000 UTC m=+3220.250174287" watchObservedRunningTime="2026-01-22 17:23:18.772922867 +0000 UTC m=+3220.256262142" Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.797553 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.744141684 podStartE2EDuration="2.797527329s" podCreationTimestamp="2026-01-22 17:23:16 +0000 UTC" firstStartedPulling="2026-01-22 17:23:17.888152394 +0000 UTC m=+3219.371491679" lastFinishedPulling="2026-01-22 17:23:17.941538039 +0000 UTC m=+3219.424877324" observedRunningTime="2026-01-22 17:23:18.792619175 +0000 UTC m=+3220.275958470" watchObservedRunningTime="2026-01-22 17:23:18.797527329 +0000 UTC m=+3220.280866614" Jan 22 17:23:18 crc kubenswrapper[4758]: I0122 17:23:18.837343 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.633266108 podStartE2EDuration="2.837322993s" podCreationTimestamp="2026-01-22 17:23:16 +0000 UTC" firstStartedPulling="2026-01-22 17:23:17.453672806 +0000 UTC m=+3218.937012091" lastFinishedPulling="2026-01-22 17:23:17.657729691 +0000 UTC m=+3219.141068976" observedRunningTime="2026-01-22 17:23:18.826439527 +0000 UTC m=+3220.309778812" watchObservedRunningTime="2026-01-22 17:23:18.837322993 +0000 UTC m=+3220.320662278" Jan 22 17:23:21 crc kubenswrapper[4758]: I0122 17:23:21.773311 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 22 17:23:21 crc kubenswrapper[4758]: I0122 17:23:21.968008 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:22 crc kubenswrapper[4758]: I0122 17:23:22.105941 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:26 crc kubenswrapper[4758]: I0122 17:23:26.964212 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 22 17:23:27 crc kubenswrapper[4758]: I0122 17:23:27.305241 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Jan 22 17:23:27 crc kubenswrapper[4758]: I0122 17:23:27.326361 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Jan 22 17:23:28 crc kubenswrapper[4758]: I0122 17:23:28.824582 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:23:28 crc kubenswrapper[4758]: E0122 17:23:28.825220 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:23:39 crc kubenswrapper[4758]: I0122 17:23:39.807888 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:23:39 crc kubenswrapper[4758]: E0122 17:23:39.808854 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:23:50 crc kubenswrapper[4758]: I0122 17:23:50.808137 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:23:50 crc kubenswrapper[4758]: E0122 17:23:50.808944 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:24:05 crc kubenswrapper[4758]: I0122 17:24:05.808225 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:24:05 crc kubenswrapper[4758]: E0122 17:24:05.809401 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:24:18 crc kubenswrapper[4758]: I0122 17:24:18.816844 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:24:18 crc kubenswrapper[4758]: E0122 17:24:18.817650 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:24:20 crc kubenswrapper[4758]: I0122 17:24:20.744987 4758 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 17:24:20 crc kubenswrapper[4758]: I0122 17:24:20.745650 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="prometheus" containerID="cri-o://bbafea354bc8ab01a2fcb8bfb3408bcbad92c9e0c5610e8f2ca2556cf992d016" gracePeriod=600 Jan 22 17:24:20 crc kubenswrapper[4758]: I0122 17:24:20.745801 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="config-reloader" containerID="cri-o://547f95b8f34477876f50da9b88337e0c77beed73657f61dd5718d36d559d828b" gracePeriod=600 Jan 22 17:24:20 crc kubenswrapper[4758]: I0122 17:24:20.745831 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="thanos-sidecar" containerID="cri-o://92e384cbef786adb26478411db336ab78d74400a9286cc97b00a8476a2853b59" gracePeriod=600 Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.499094 4758 generic.go:334] "Generic (PLEG): container finished" podID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerID="92e384cbef786adb26478411db336ab78d74400a9286cc97b00a8476a2853b59" exitCode=0 Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.499444 4758 generic.go:334] "Generic (PLEG): container finished" podID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerID="547f95b8f34477876f50da9b88337e0c77beed73657f61dd5718d36d559d828b" exitCode=0 Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.499460 4758 generic.go:334] "Generic (PLEG): container finished" podID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerID="bbafea354bc8ab01a2fcb8bfb3408bcbad92c9e0c5610e8f2ca2556cf992d016" exitCode=0 Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.499510 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerDied","Data":"92e384cbef786adb26478411db336ab78d74400a9286cc97b00a8476a2853b59"} Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.499543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerDied","Data":"547f95b8f34477876f50da9b88337e0c77beed73657f61dd5718d36d559d828b"} Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.499560 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerDied","Data":"bbafea354bc8ab01a2fcb8bfb3408bcbad92c9e0c5610e8f2ca2556cf992d016"} Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.706061 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.746843 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.746979 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-0\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.747006 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmltc\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-kube-api-access-tmltc\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.747876 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.769999 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-kube-api-access-tmltc" (OuterVolumeSpecName: "kube-api-access-tmltc") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "kube-api-access-tmltc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.896226 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.896315 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.946992 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-2\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.947062 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-secret-combined-ca-bundle\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.947139 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config-out\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.947164 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-thanos-prometheus-http-client-file\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.947209 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-tls-assets\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.947238 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-1\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.947282 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: 
\"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.947325 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config\") pod \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\" (UID: \"d7a10f61-441f-4ec1-a6fa-c34ff9a75956\") " Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.948071 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmltc\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-kube-api-access-tmltc\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.948086 4758 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.952397 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.953201 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.963071 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.963832 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config" (OuterVolumeSpecName: "config") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.963880 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.964016 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.984958 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.987942 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config-out" (OuterVolumeSpecName: "config-out") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:24:21 crc kubenswrapper[4758]: I0122 17:24:21.994827 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.033634 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.052825 4758 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.052867 4758 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.052881 4758 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config-out\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.052893 4758 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.053089 4758 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.053100 4758 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.053112 4758 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.053126 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-config\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.053170 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") on node \"crc\" " Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.053183 4758 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.136895 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config" (OuterVolumeSpecName: "web-config") pod "d7a10f61-441f-4ec1-a6fa-c34ff9a75956" (UID: "d7a10f61-441f-4ec1-a6fa-c34ff9a75956"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.154808 4758 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d7a10f61-441f-4ec1-a6fa-c34ff9a75956-web-config\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.238006 4758 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.238611 4758 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d") on node "crc" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.257181 4758 reconciler_common.go:293] "Volume detached for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") on node \"crc\" DevicePath \"\"" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.511527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d7a10f61-441f-4ec1-a6fa-c34ff9a75956","Type":"ContainerDied","Data":"cd2e3ab4842ba9acedd02ccea2520aa7383259dfbbdc8651d3d46ea9f99551cb"} Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.511631 4758 scope.go:117] "RemoveContainer" containerID="92e384cbef786adb26478411db336ab78d74400a9286cc97b00a8476a2853b59" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.511640 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.540535 4758 scope.go:117] "RemoveContainer" containerID="547f95b8f34477876f50da9b88337e0c77beed73657f61dd5718d36d559d828b" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.556584 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.571734 4758 scope.go:117] "RemoveContainer" containerID="bbafea354bc8ab01a2fcb8bfb3408bcbad92c9e0c5610e8f2ca2556cf992d016" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.580858 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.698844 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 17:24:22 crc kubenswrapper[4758]: E0122 17:24:22.699396 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="config-reloader" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.699420 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="config-reloader" Jan 22 17:24:22 crc kubenswrapper[4758]: E0122 17:24:22.699432 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="prometheus" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.699440 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="prometheus" Jan 22 17:24:22 crc kubenswrapper[4758]: E0122 17:24:22.699464 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="init-config-reloader" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.699472 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="init-config-reloader" Jan 22 17:24:22 crc kubenswrapper[4758]: E0122 17:24:22.699486 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="thanos-sidecar" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.699493 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="thanos-sidecar" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.699719 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="prometheus" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.699758 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="config-reloader" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.699783 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" containerName="thanos-sidecar" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.701814 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.708554 4758 scope.go:117] "RemoveContainer" containerID="46011a6b7ef00eea6019191c83e55d400ae06da48727b373c7d23e640120b934" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.709014 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.709066 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.709203 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.709330 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.709413 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.709019 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4ftsd" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.712537 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.715302 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.732819 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.824243 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a10f61-441f-4ec1-a6fa-c34ff9a75956" path="/var/lib/kubelet/pods/d7a10f61-441f-4ec1-a6fa-c34ff9a75956/volumes" Jan 22 17:24:22 crc 
kubenswrapper[4758]: I0122 17:24:22.895182 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/743945d0-7488-4665-beaf-f2026e10a424-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895238 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjd7v\" (UniqueName: \"kubernetes.io/projected/743945d0-7488-4665-beaf-f2026e10a424-kube-api-access-sjd7v\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895314 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895335 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/743945d0-7488-4665-beaf-f2026e10a424-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895373 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/743945d0-7488-4665-beaf-f2026e10a424-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895414 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/743945d0-7488-4665-beaf-f2026e10a424-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895453 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/743945d0-7488-4665-beaf-f2026e10a424-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895482 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895534 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895629 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895679 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895708 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-config\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.895734 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.997887 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.998722 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.998784 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.998814 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-config\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.998857 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.998999 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/743945d0-7488-4665-beaf-f2026e10a424-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.999055 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjd7v\" (UniqueName: \"kubernetes.io/projected/743945d0-7488-4665-beaf-f2026e10a424-kube-api-access-sjd7v\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.999113 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.999137 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/743945d0-7488-4665-beaf-f2026e10a424-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.999187 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/743945d0-7488-4665-beaf-f2026e10a424-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.999267 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/743945d0-7488-4665-beaf-f2026e10a424-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.999360 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/743945d0-7488-4665-beaf-f2026e10a424-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " 
pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:22 crc kubenswrapper[4758]: I0122 17:24:22.999426 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:22.999839 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/743945d0-7488-4665-beaf-f2026e10a424-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.001230 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/743945d0-7488-4665-beaf-f2026e10a424-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.001297 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/743945d0-7488-4665-beaf-f2026e10a424-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.005044 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/743945d0-7488-4665-beaf-f2026e10a424-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.005818 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.006230 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/743945d0-7488-4665-beaf-f2026e10a424-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.006465 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.006764 4758 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.006802 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/51d824e7b7431a599087fae5dbad8d5d5ded71f29385012a23b0aa020d358d8d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.007003 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-config\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.008710 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.008731 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.019877 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/743945d0-7488-4665-beaf-f2026e10a424-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.023824 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjd7v\" (UniqueName: \"kubernetes.io/projected/743945d0-7488-4665-beaf-f2026e10a424-kube-api-access-sjd7v\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.049699 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90012821-fb2f-4f8d-aaca-e2d78515e50d\") pod \"prometheus-metric-storage-0\" (UID: \"743945d0-7488-4665-beaf-f2026e10a424\") " pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.081484 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:23 crc kubenswrapper[4758]: I0122 17:24:23.674468 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 17:24:24 crc kubenswrapper[4758]: I0122 17:24:24.532687 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"743945d0-7488-4665-beaf-f2026e10a424","Type":"ContainerStarted","Data":"8da001cada8f7d468d87b204f423e14a2de56183a74b08fc636e828dde43359c"} Jan 22 17:24:27 crc kubenswrapper[4758]: I0122 17:24:27.652222 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"743945d0-7488-4665-beaf-f2026e10a424","Type":"ContainerStarted","Data":"b64c6208f3dba17bf27251a264fa3ffcaa124fe49899b4b405f3c4f8cec599e0"} Jan 22 17:24:29 crc kubenswrapper[4758]: I0122 17:24:29.809023 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:24:29 crc kubenswrapper[4758]: E0122 17:24:29.809992 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:24:35 crc kubenswrapper[4758]: I0122 17:24:35.728683 4758 generic.go:334] "Generic (PLEG): container finished" podID="743945d0-7488-4665-beaf-f2026e10a424" containerID="b64c6208f3dba17bf27251a264fa3ffcaa124fe49899b4b405f3c4f8cec599e0" exitCode=0 Jan 22 17:24:35 crc kubenswrapper[4758]: I0122 17:24:35.728891 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"743945d0-7488-4665-beaf-f2026e10a424","Type":"ContainerDied","Data":"b64c6208f3dba17bf27251a264fa3ffcaa124fe49899b4b405f3c4f8cec599e0"} Jan 22 17:24:36 crc kubenswrapper[4758]: I0122 17:24:36.758401 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"743945d0-7488-4665-beaf-f2026e10a424","Type":"ContainerStarted","Data":"09e62bd9446228dc9079aba87d364a05a222ecf34dd4b39cccc6bf02aa404a90"} Jan 22 17:24:39 crc kubenswrapper[4758]: I0122 17:24:39.784366 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"743945d0-7488-4665-beaf-f2026e10a424","Type":"ContainerStarted","Data":"a9d4da706f65aef6f3a2675a36b4dc6fb788a98bb5ef5706c243e149d064a2e9"} Jan 22 17:24:39 crc kubenswrapper[4758]: I0122 17:24:39.785010 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"743945d0-7488-4665-beaf-f2026e10a424","Type":"ContainerStarted","Data":"26998a9480020e04f749b6341aa16daaa107820e15afbe6a581051d0ac2bb7f2"} Jan 22 17:24:42 crc kubenswrapper[4758]: I0122 17:24:42.808347 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:24:42 crc kubenswrapper[4758]: E0122 17:24:42.808805 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:24:43 crc kubenswrapper[4758]: I0122 17:24:43.082195 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:53 crc kubenswrapper[4758]: I0122 17:24:53.082044 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:53 crc kubenswrapper[4758]: I0122 17:24:53.088534 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:53 crc kubenswrapper[4758]: I0122 17:24:53.150041 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=31.150004432 podStartE2EDuration="31.150004432s" podCreationTimestamp="2026-01-22 17:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:24:39.820209658 +0000 UTC m=+3301.303548943" watchObservedRunningTime="2026-01-22 17:24:53.150004432 +0000 UTC m=+3314.633343717" Jan 22 17:24:53 crc kubenswrapper[4758]: I0122 17:24:53.949935 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 22 17:24:55 crc kubenswrapper[4758]: I0122 17:24:55.809144 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:24:55 crc kubenswrapper[4758]: E0122 17:24:55.809764 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.252432 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4bnwz"] Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.255625 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.262342 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bnwz"] Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.447838 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-catalog-content\") pod \"redhat-operators-4bnwz\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.447923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6qt2\" (UniqueName: \"kubernetes.io/projected/8357021e-5cae-4974-9a56-b9eb2fc7d157-kube-api-access-f6qt2\") pod \"redhat-operators-4bnwz\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.448002 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-utilities\") pod \"redhat-operators-4bnwz\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.549573 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-catalog-content\") pod \"redhat-operators-4bnwz\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.549646 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6qt2\" (UniqueName: \"kubernetes.io/projected/8357021e-5cae-4974-9a56-b9eb2fc7d157-kube-api-access-f6qt2\") pod \"redhat-operators-4bnwz\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.549710 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-utilities\") pod \"redhat-operators-4bnwz\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.550134 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-catalog-content\") pod \"redhat-operators-4bnwz\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.550175 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-utilities\") pod \"redhat-operators-4bnwz\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.574679 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f6qt2\" (UniqueName: \"kubernetes.io/projected/8357021e-5cae-4974-9a56-b9eb2fc7d157-kube-api-access-f6qt2\") pod \"redhat-operators-4bnwz\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:06 crc kubenswrapper[4758]: I0122 17:25:06.598603 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:07 crc kubenswrapper[4758]: I0122 17:25:07.345661 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bnwz"] Jan 22 17:25:08 crc kubenswrapper[4758]: I0122 17:25:08.262805 4758 generic.go:334] "Generic (PLEG): container finished" podID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerID="5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28" exitCode=0 Jan 22 17:25:08 crc kubenswrapper[4758]: I0122 17:25:08.262820 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bnwz" event={"ID":"8357021e-5cae-4974-9a56-b9eb2fc7d157","Type":"ContainerDied","Data":"5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28"} Jan 22 17:25:08 crc kubenswrapper[4758]: I0122 17:25:08.263178 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bnwz" event={"ID":"8357021e-5cae-4974-9a56-b9eb2fc7d157","Type":"ContainerStarted","Data":"37ffda7d77b37f31bd7e09e56e62d22276cecd56f0fdfada6ea1a34d3058db68"} Jan 22 17:25:08 crc kubenswrapper[4758]: I0122 17:25:08.814706 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:25:08 crc kubenswrapper[4758]: E0122 17:25:08.815282 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:25:10 crc kubenswrapper[4758]: I0122 17:25:10.280417 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bnwz" event={"ID":"8357021e-5cae-4974-9a56-b9eb2fc7d157","Type":"ContainerStarted","Data":"14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad"} Jan 22 17:25:14 crc kubenswrapper[4758]: I0122 17:25:14.342290 4758 generic.go:334] "Generic (PLEG): container finished" podID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerID="14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad" exitCode=0 Jan 22 17:25:14 crc kubenswrapper[4758]: I0122 17:25:14.342375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bnwz" event={"ID":"8357021e-5cae-4974-9a56-b9eb2fc7d157","Type":"ContainerDied","Data":"14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad"} Jan 22 17:25:15 crc kubenswrapper[4758]: I0122 17:25:15.354895 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bnwz" event={"ID":"8357021e-5cae-4974-9a56-b9eb2fc7d157","Type":"ContainerStarted","Data":"10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33"} Jan 22 17:25:15 crc kubenswrapper[4758]: I0122 17:25:15.382952 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-4bnwz" podStartSLOduration=2.879912608 podStartE2EDuration="9.382924392s" podCreationTimestamp="2026-01-22 17:25:06 +0000 UTC" firstStartedPulling="2026-01-22 17:25:08.266533634 +0000 UTC m=+3329.749872939" lastFinishedPulling="2026-01-22 17:25:14.769545418 +0000 UTC m=+3336.252884723" observedRunningTime="2026-01-22 17:25:15.377657528 +0000 UTC m=+3336.860996823" watchObservedRunningTime="2026-01-22 17:25:15.382924392 +0000 UTC m=+3336.866263707" Jan 22 17:25:16 crc kubenswrapper[4758]: I0122 17:25:16.599065 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:16 crc kubenswrapper[4758]: I0122 17:25:16.599408 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:17 crc kubenswrapper[4758]: I0122 17:25:17.643652 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4bnwz" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="registry-server" probeResult="failure" output=< Jan 22 17:25:17 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 17:25:17 crc kubenswrapper[4758]: > Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.002344 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.004121 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.011886 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.011947 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.012125 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-d4w66" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.011952 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.031351 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.060433 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.060543 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a5885aa-206d-4176-bc4b-2967b7391af9-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.060604 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/0a5885aa-206d-4176-bc4b-2967b7391af9-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.060651 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a5885aa-206d-4176-bc4b-2967b7391af9-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.060714 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrd6m\" (UniqueName: \"kubernetes.io/projected/0a5885aa-206d-4176-bc4b-2967b7391af9-kube-api-access-wrd6m\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.060851 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a5885aa-206d-4176-bc4b-2967b7391af9-config-data\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.060885 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0a5885aa-206d-4176-bc4b-2967b7391af9-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.060954 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a5885aa-206d-4176-bc4b-2967b7391af9-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.060988 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0a5885aa-206d-4176-bc4b-2967b7391af9-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.162726 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a5885aa-206d-4176-bc4b-2967b7391af9-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163064 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0a5885aa-206d-4176-bc4b-2967b7391af9-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163111 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a5885aa-206d-4176-bc4b-2967b7391af9-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163139 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd6m\" (UniqueName: \"kubernetes.io/projected/0a5885aa-206d-4176-bc4b-2967b7391af9-kube-api-access-wrd6m\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163266 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a5885aa-206d-4176-bc4b-2967b7391af9-config-data\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0a5885aa-206d-4176-bc4b-2967b7391af9-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163336 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a5885aa-206d-4176-bc4b-2967b7391af9-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163359 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0a5885aa-206d-4176-bc4b-2967b7391af9-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163438 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163563 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0a5885aa-206d-4176-bc4b-2967b7391af9-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163697 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.163933 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0a5885aa-206d-4176-bc4b-2967b7391af9-test-operator-ephemeral-temporary\") 
pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.164248 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a5885aa-206d-4176-bc4b-2967b7391af9-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.165237 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a5885aa-206d-4176-bc4b-2967b7391af9-config-data\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.169551 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a5885aa-206d-4176-bc4b-2967b7391af9-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.170211 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a5885aa-206d-4176-bc4b-2967b7391af9-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.172418 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0a5885aa-206d-4176-bc4b-2967b7391af9-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.184351 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd6m\" (UniqueName: \"kubernetes.io/projected/0a5885aa-206d-4176-bc4b-2967b7391af9-kube-api-access-wrd6m\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.201723 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0a5885aa-206d-4176-bc4b-2967b7391af9\") " pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.340326 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 17:25:18 crc kubenswrapper[4758]: I0122 17:25:18.667214 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 22 17:25:19 crc kubenswrapper[4758]: I0122 17:25:19.408099 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0a5885aa-206d-4176-bc4b-2967b7391af9","Type":"ContainerStarted","Data":"fe44fe5b32c00306a553984d9c248aa8d68428539000f9fb1b42da31417503a2"} Jan 22 17:25:21 crc kubenswrapper[4758]: I0122 17:25:21.808294 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:25:21 crc kubenswrapper[4758]: E0122 17:25:21.809390 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:25:27 crc kubenswrapper[4758]: I0122 17:25:27.648824 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4bnwz" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="registry-server" probeResult="failure" output=< Jan 22 17:25:27 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 17:25:27 crc kubenswrapper[4758]: > Jan 22 17:25:31 crc kubenswrapper[4758]: I0122 17:25:31.536832 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0a5885aa-206d-4176-bc4b-2967b7391af9","Type":"ContainerStarted","Data":"e90652289c842c810f4cd9c8ff8ec05f5de8d96b6b4f5d153399640d4d156034"} Jan 22 17:25:31 crc kubenswrapper[4758]: I0122 17:25:31.562249 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.055609379 podStartE2EDuration="15.562225931s" podCreationTimestamp="2026-01-22 17:25:16 +0000 UTC" firstStartedPulling="2026-01-22 17:25:18.69096606 +0000 UTC m=+3340.174305365" lastFinishedPulling="2026-01-22 17:25:30.197582632 +0000 UTC m=+3351.680921917" observedRunningTime="2026-01-22 17:25:31.555048895 +0000 UTC m=+3353.038388180" watchObservedRunningTime="2026-01-22 17:25:31.562225931 +0000 UTC m=+3353.045565226" Jan 22 17:25:35 crc kubenswrapper[4758]: I0122 17:25:35.808646 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:25:35 crc kubenswrapper[4758]: E0122 17:25:35.809596 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:25:36 crc kubenswrapper[4758]: I0122 17:25:36.674285 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:36 crc kubenswrapper[4758]: I0122 17:25:36.722835 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:37 crc kubenswrapper[4758]: I0122 17:25:37.461008 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bnwz"] Jan 22 17:25:38 crc kubenswrapper[4758]: I0122 17:25:38.615924 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4bnwz" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="registry-server" containerID="cri-o://10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33" gracePeriod=2 Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.172073 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.328053 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6qt2\" (UniqueName: \"kubernetes.io/projected/8357021e-5cae-4974-9a56-b9eb2fc7d157-kube-api-access-f6qt2\") pod \"8357021e-5cae-4974-9a56-b9eb2fc7d157\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.328480 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-utilities\") pod \"8357021e-5cae-4974-9a56-b9eb2fc7d157\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.328690 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-catalog-content\") pod \"8357021e-5cae-4974-9a56-b9eb2fc7d157\" (UID: \"8357021e-5cae-4974-9a56-b9eb2fc7d157\") " Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.329202 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-utilities" (OuterVolumeSpecName: "utilities") pod "8357021e-5cae-4974-9a56-b9eb2fc7d157" (UID: "8357021e-5cae-4974-9a56-b9eb2fc7d157"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.329583 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.335652 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8357021e-5cae-4974-9a56-b9eb2fc7d157-kube-api-access-f6qt2" (OuterVolumeSpecName: "kube-api-access-f6qt2") pod "8357021e-5cae-4974-9a56-b9eb2fc7d157" (UID: "8357021e-5cae-4974-9a56-b9eb2fc7d157"). InnerVolumeSpecName "kube-api-access-f6qt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.433802 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6qt2\" (UniqueName: \"kubernetes.io/projected/8357021e-5cae-4974-9a56-b9eb2fc7d157-kube-api-access-f6qt2\") on node \"crc\" DevicePath \"\"" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.465222 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8357021e-5cae-4974-9a56-b9eb2fc7d157" (UID: "8357021e-5cae-4974-9a56-b9eb2fc7d157"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.535831 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8357021e-5cae-4974-9a56-b9eb2fc7d157-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.629160 4758 generic.go:334] "Generic (PLEG): container finished" podID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerID="10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33" exitCode=0 Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.629237 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bnwz" event={"ID":"8357021e-5cae-4974-9a56-b9eb2fc7d157","Type":"ContainerDied","Data":"10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33"} Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.629275 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bnwz" event={"ID":"8357021e-5cae-4974-9a56-b9eb2fc7d157","Type":"ContainerDied","Data":"37ffda7d77b37f31bd7e09e56e62d22276cecd56f0fdfada6ea1a34d3058db68"} Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.629298 4758 scope.go:117] "RemoveContainer" containerID="10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.629491 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bnwz" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.663919 4758 scope.go:117] "RemoveContainer" containerID="14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.681033 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bnwz"] Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.696847 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4bnwz"] Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.714392 4758 scope.go:117] "RemoveContainer" containerID="5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.760642 4758 scope.go:117] "RemoveContainer" containerID="10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33" Jan 22 17:25:39 crc kubenswrapper[4758]: E0122 17:25:39.761671 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33\": container with ID starting with 10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33 not found: ID does not exist" containerID="10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.761730 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33"} err="failed to get container status \"10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33\": rpc error: code = NotFound desc = could not find container \"10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33\": container with ID starting with 10cc333c841e4c1b24a8003ba23bc87a611779757d625bd6e9c428104afacf33 not found: ID does not exist" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.761807 4758 scope.go:117] "RemoveContainer" containerID="14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad" Jan 22 17:25:39 crc kubenswrapper[4758]: E0122 17:25:39.762145 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad\": container with ID starting with 14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad not found: ID does not exist" containerID="14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.762182 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad"} err="failed to get container status \"14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad\": rpc error: code = NotFound desc = could not find container \"14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad\": container with ID starting with 14cd6cf2f3002097c75546746fcd6521b28a3bf4704d35a90db25ad1355ef9ad not found: ID does not exist" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.762221 4758 scope.go:117] "RemoveContainer" containerID="5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28" Jan 22 17:25:39 crc kubenswrapper[4758]: E0122 17:25:39.762440 4758 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28\": container with ID starting with 5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28 not found: ID does not exist" containerID="5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28" Jan 22 17:25:39 crc kubenswrapper[4758]: I0122 17:25:39.762460 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28"} err="failed to get container status \"5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28\": rpc error: code = NotFound desc = could not find container \"5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28\": container with ID starting with 5c62c423e481a43c8aed5e173d5557fd6ee4aeaf0348f6219654593e3478da28 not found: ID does not exist" Jan 22 17:25:40 crc kubenswrapper[4758]: I0122 17:25:40.821668 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" path="/var/lib/kubelet/pods/8357021e-5cae-4974-9a56-b9eb2fc7d157/volumes" Jan 22 17:25:46 crc kubenswrapper[4758]: I0122 17:25:46.808081 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:25:46 crc kubenswrapper[4758]: E0122 17:25:46.808851 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:26:00 crc kubenswrapper[4758]: I0122 17:26:00.809267 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:26:00 crc kubenswrapper[4758]: E0122 17:26:00.810479 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:26:15 crc kubenswrapper[4758]: I0122 17:26:15.809417 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:26:15 crc kubenswrapper[4758]: E0122 17:26:15.810656 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:26:29 crc kubenswrapper[4758]: I0122 17:26:29.808290 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:26:29 crc kubenswrapper[4758]: E0122 17:26:29.809229 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:26:42 crc kubenswrapper[4758]: I0122 17:26:42.808573 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:26:42 crc kubenswrapper[4758]: E0122 17:26:42.809451 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:26:57 crc kubenswrapper[4758]: I0122 17:26:57.808360 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:26:58 crc kubenswrapper[4758]: I0122 17:26:58.522307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"1a6b57c06e858afc9440772312f2f1d6c577633fd537cacb24d567278025f461"} Jan 22 17:29:13 crc kubenswrapper[4758]: I0122 17:29:13.836988 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:29:13 crc kubenswrapper[4758]: I0122 17:29:13.837851 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:29:43 crc kubenswrapper[4758]: I0122 17:29:43.837123 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:29:43 crc kubenswrapper[4758]: I0122 17:29:43.837575 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.176587 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq"] Jan 22 17:30:00 crc kubenswrapper[4758]: E0122 17:30:00.177924 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="extract-content" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.177953 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="extract-content" Jan 22 17:30:00 crc kubenswrapper[4758]: E0122 17:30:00.177985 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="registry-server" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.177994 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="registry-server" Jan 22 17:30:00 crc kubenswrapper[4758]: E0122 17:30:00.178017 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="extract-utilities" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.178025 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="extract-utilities" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.178321 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8357021e-5cae-4974-9a56-b9eb2fc7d157" containerName="registry-server" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.179326 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.183551 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.184214 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.189225 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq"] Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.223884 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1366d10c-135c-489b-920a-3aef5896bbb6-secret-volume\") pod \"collect-profiles-29485050-t4brq\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.224365 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bplht\" (UniqueName: \"kubernetes.io/projected/1366d10c-135c-489b-920a-3aef5896bbb6-kube-api-access-bplht\") pod \"collect-profiles-29485050-t4brq\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.224517 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1366d10c-135c-489b-920a-3aef5896bbb6-config-volume\") pod \"collect-profiles-29485050-t4brq\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.326629 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bplht\" (UniqueName: \"kubernetes.io/projected/1366d10c-135c-489b-920a-3aef5896bbb6-kube-api-access-bplht\") pod \"collect-profiles-29485050-t4brq\" (UID: 
\"1366d10c-135c-489b-920a-3aef5896bbb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.326790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1366d10c-135c-489b-920a-3aef5896bbb6-config-volume\") pod \"collect-profiles-29485050-t4brq\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.326841 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1366d10c-135c-489b-920a-3aef5896bbb6-secret-volume\") pod \"collect-profiles-29485050-t4brq\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.328436 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1366d10c-135c-489b-920a-3aef5896bbb6-config-volume\") pod \"collect-profiles-29485050-t4brq\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.342964 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1366d10c-135c-489b-920a-3aef5896bbb6-secret-volume\") pod \"collect-profiles-29485050-t4brq\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.348179 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bplht\" (UniqueName: \"kubernetes.io/projected/1366d10c-135c-489b-920a-3aef5896bbb6-kube-api-access-bplht\") pod \"collect-profiles-29485050-t4brq\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:00 crc kubenswrapper[4758]: I0122 17:30:00.509313 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:01 crc kubenswrapper[4758]: I0122 17:30:01.091192 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq"] Jan 22 17:30:01 crc kubenswrapper[4758]: W0122 17:30:01.103333 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1366d10c_135c_489b_920a_3aef5896bbb6.slice/crio-5ceb87fccac309a8a401851d05e645c4ce92b8f40553e8ddb3886e041eeda4a0 WatchSource:0}: Error finding container 5ceb87fccac309a8a401851d05e645c4ce92b8f40553e8ddb3886e041eeda4a0: Status 404 returned error can't find the container with id 5ceb87fccac309a8a401851d05e645c4ce92b8f40553e8ddb3886e041eeda4a0 Jan 22 17:30:01 crc kubenswrapper[4758]: I0122 17:30:01.818088 4758 generic.go:334] "Generic (PLEG): container finished" podID="1366d10c-135c-489b-920a-3aef5896bbb6" containerID="78d02e6c4f17eb6bf6983110aa8404fd5f5a2595921a8bfebef37f381564adc8" exitCode=0 Jan 22 17:30:01 crc kubenswrapper[4758]: I0122 17:30:01.818479 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" event={"ID":"1366d10c-135c-489b-920a-3aef5896bbb6","Type":"ContainerDied","Data":"78d02e6c4f17eb6bf6983110aa8404fd5f5a2595921a8bfebef37f381564adc8"} Jan 22 17:30:01 crc kubenswrapper[4758]: I0122 17:30:01.818511 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" event={"ID":"1366d10c-135c-489b-920a-3aef5896bbb6","Type":"ContainerStarted","Data":"5ceb87fccac309a8a401851d05e645c4ce92b8f40553e8ddb3886e041eeda4a0"} Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.245287 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.291978 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1366d10c-135c-489b-920a-3aef5896bbb6-secret-volume\") pod \"1366d10c-135c-489b-920a-3aef5896bbb6\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.292203 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1366d10c-135c-489b-920a-3aef5896bbb6-config-volume\") pod \"1366d10c-135c-489b-920a-3aef5896bbb6\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.292292 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bplht\" (UniqueName: \"kubernetes.io/projected/1366d10c-135c-489b-920a-3aef5896bbb6-kube-api-access-bplht\") pod \"1366d10c-135c-489b-920a-3aef5896bbb6\" (UID: \"1366d10c-135c-489b-920a-3aef5896bbb6\") " Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.293302 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1366d10c-135c-489b-920a-3aef5896bbb6-config-volume" (OuterVolumeSpecName: "config-volume") pod "1366d10c-135c-489b-920a-3aef5896bbb6" (UID: "1366d10c-135c-489b-920a-3aef5896bbb6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.298975 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1366d10c-135c-489b-920a-3aef5896bbb6-kube-api-access-bplht" (OuterVolumeSpecName: "kube-api-access-bplht") pod "1366d10c-135c-489b-920a-3aef5896bbb6" (UID: "1366d10c-135c-489b-920a-3aef5896bbb6"). InnerVolumeSpecName "kube-api-access-bplht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.299984 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1366d10c-135c-489b-920a-3aef5896bbb6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1366d10c-135c-489b-920a-3aef5896bbb6" (UID: "1366d10c-135c-489b-920a-3aef5896bbb6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.394138 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1366d10c-135c-489b-920a-3aef5896bbb6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.394179 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1366d10c-135c-489b-920a-3aef5896bbb6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.394192 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bplht\" (UniqueName: \"kubernetes.io/projected/1366d10c-135c-489b-920a-3aef5896bbb6-kube-api-access-bplht\") on node \"crc\" DevicePath \"\"" Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.843437 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" event={"ID":"1366d10c-135c-489b-920a-3aef5896bbb6","Type":"ContainerDied","Data":"5ceb87fccac309a8a401851d05e645c4ce92b8f40553e8ddb3886e041eeda4a0"} Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.843505 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ceb87fccac309a8a401851d05e645c4ce92b8f40553e8ddb3886e041eeda4a0" Jan 22 17:30:03 crc kubenswrapper[4758]: I0122 17:30:03.843534 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq" Jan 22 17:30:04 crc kubenswrapper[4758]: I0122 17:30:04.329728 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8"] Jan 22 17:30:04 crc kubenswrapper[4758]: I0122 17:30:04.339458 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485005-rdjt8"] Jan 22 17:30:04 crc kubenswrapper[4758]: I0122 17:30:04.826659 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e688668d-0d28-4d1b-aa2a-4bba257e9093" path="/var/lib/kubelet/pods/e688668d-0d28-4d1b-aa2a-4bba257e9093/volumes" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.592868 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sfkpq"] Jan 22 17:30:09 crc kubenswrapper[4758]: E0122 17:30:09.595177 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1366d10c-135c-489b-920a-3aef5896bbb6" containerName="collect-profiles" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.595207 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1366d10c-135c-489b-920a-3aef5896bbb6" containerName="collect-profiles" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.595648 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1366d10c-135c-489b-920a-3aef5896bbb6" containerName="collect-profiles" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.599411 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.606236 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfkpq"] Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.749563 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9961771-fe17-45c0-ba58-04a487d45f06-utilities\") pod \"community-operators-sfkpq\" (UID: \"c9961771-fe17-45c0-ba58-04a487d45f06\") " pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.749933 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2v9w\" (UniqueName: \"kubernetes.io/projected/c9961771-fe17-45c0-ba58-04a487d45f06-kube-api-access-h2v9w\") pod \"community-operators-sfkpq\" (UID: \"c9961771-fe17-45c0-ba58-04a487d45f06\") " pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.750006 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9961771-fe17-45c0-ba58-04a487d45f06-catalog-content\") pod \"community-operators-sfkpq\" (UID: \"c9961771-fe17-45c0-ba58-04a487d45f06\") " pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.853230 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2v9w\" (UniqueName: \"kubernetes.io/projected/c9961771-fe17-45c0-ba58-04a487d45f06-kube-api-access-h2v9w\") pod \"community-operators-sfkpq\" (UID: \"c9961771-fe17-45c0-ba58-04a487d45f06\") " pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc 
kubenswrapper[4758]: I0122 17:30:09.853387 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9961771-fe17-45c0-ba58-04a487d45f06-catalog-content\") pod \"community-operators-sfkpq\" (UID: \"c9961771-fe17-45c0-ba58-04a487d45f06\") " pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.853557 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9961771-fe17-45c0-ba58-04a487d45f06-utilities\") pod \"community-operators-sfkpq\" (UID: \"c9961771-fe17-45c0-ba58-04a487d45f06\") " pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.854230 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9961771-fe17-45c0-ba58-04a487d45f06-utilities\") pod \"community-operators-sfkpq\" (UID: \"c9961771-fe17-45c0-ba58-04a487d45f06\") " pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.854241 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9961771-fe17-45c0-ba58-04a487d45f06-catalog-content\") pod \"community-operators-sfkpq\" (UID: \"c9961771-fe17-45c0-ba58-04a487d45f06\") " pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.879026 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2v9w\" (UniqueName: \"kubernetes.io/projected/c9961771-fe17-45c0-ba58-04a487d45f06-kube-api-access-h2v9w\") pod \"community-operators-sfkpq\" (UID: \"c9961771-fe17-45c0-ba58-04a487d45f06\") " pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:09 crc kubenswrapper[4758]: I0122 17:30:09.929877 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:10 crc kubenswrapper[4758]: I0122 17:30:10.527232 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfkpq"] Jan 22 17:30:10 crc kubenswrapper[4758]: I0122 17:30:10.923067 4758 generic.go:334] "Generic (PLEG): container finished" podID="c9961771-fe17-45c0-ba58-04a487d45f06" containerID="11972d56a535c9b78b71940a19c91e3a0b74af6b9f41262bb5c491d743744ad8" exitCode=0 Jan 22 17:30:10 crc kubenswrapper[4758]: I0122 17:30:10.923116 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfkpq" event={"ID":"c9961771-fe17-45c0-ba58-04a487d45f06","Type":"ContainerDied","Data":"11972d56a535c9b78b71940a19c91e3a0b74af6b9f41262bb5c491d743744ad8"} Jan 22 17:30:10 crc kubenswrapper[4758]: I0122 17:30:10.923145 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfkpq" event={"ID":"c9961771-fe17-45c0-ba58-04a487d45f06","Type":"ContainerStarted","Data":"2e6193ba5f96aa4ab95980b6f0f25fb4b7cf820be4bda911a86e8a02958a44a5"} Jan 22 17:30:10 crc kubenswrapper[4758]: I0122 17:30:10.925268 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:30:13 crc kubenswrapper[4758]: I0122 17:30:13.837288 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:30:13 crc kubenswrapper[4758]: I0122 17:30:13.837751 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:30:13 crc kubenswrapper[4758]: I0122 17:30:13.837795 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:30:13 crc kubenswrapper[4758]: I0122 17:30:13.838841 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a6b57c06e858afc9440772312f2f1d6c577633fd537cacb24d567278025f461"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:30:13 crc kubenswrapper[4758]: I0122 17:30:13.838910 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://1a6b57c06e858afc9440772312f2f1d6c577633fd537cacb24d567278025f461" gracePeriod=600 Jan 22 17:30:14 crc kubenswrapper[4758]: I0122 17:30:14.968642 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="1a6b57c06e858afc9440772312f2f1d6c577633fd537cacb24d567278025f461" exitCode=0 Jan 22 17:30:14 crc kubenswrapper[4758]: I0122 17:30:14.968711 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" 
event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"1a6b57c06e858afc9440772312f2f1d6c577633fd537cacb24d567278025f461"} Jan 22 17:30:14 crc kubenswrapper[4758]: I0122 17:30:14.969058 4758 scope.go:117] "RemoveContainer" containerID="6b6038efa721e68032c4b8465c33e81c0d3698308aa5597c04600a44e4aa9178" Jan 22 17:30:16 crc kubenswrapper[4758]: I0122 17:30:16.987647 4758 generic.go:334] "Generic (PLEG): container finished" podID="c9961771-fe17-45c0-ba58-04a487d45f06" containerID="ffd23c0246ac643707d6ee398b69abc1cef82d3948f2d35e373d873473626b20" exitCode=0 Jan 22 17:30:16 crc kubenswrapper[4758]: I0122 17:30:16.987775 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfkpq" event={"ID":"c9961771-fe17-45c0-ba58-04a487d45f06","Type":"ContainerDied","Data":"ffd23c0246ac643707d6ee398b69abc1cef82d3948f2d35e373d873473626b20"} Jan 22 17:30:16 crc kubenswrapper[4758]: I0122 17:30:16.992689 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688"} Jan 22 17:30:18 crc kubenswrapper[4758]: I0122 17:30:18.006001 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfkpq" event={"ID":"c9961771-fe17-45c0-ba58-04a487d45f06","Type":"ContainerStarted","Data":"f92ae1f5a9224df72977b0af5f1e41131abc17ecb3a3a1b03a7b99c9bbb2ad63"} Jan 22 17:30:18 crc kubenswrapper[4758]: I0122 17:30:18.031038 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sfkpq" podStartSLOduration=2.42119584 podStartE2EDuration="9.031004578s" podCreationTimestamp="2026-01-22 17:30:09 +0000 UTC" firstStartedPulling="2026-01-22 17:30:10.925028554 +0000 UTC m=+3632.408367829" lastFinishedPulling="2026-01-22 17:30:17.534837282 +0000 UTC m=+3639.018176567" observedRunningTime="2026-01-22 17:30:18.029959269 +0000 UTC m=+3639.513298554" watchObservedRunningTime="2026-01-22 17:30:18.031004578 +0000 UTC m=+3639.514343853" Jan 22 17:30:19 crc kubenswrapper[4758]: I0122 17:30:19.931498 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:19 crc kubenswrapper[4758]: I0122 17:30:19.931874 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:19 crc kubenswrapper[4758]: I0122 17:30:19.991545 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:29 crc kubenswrapper[4758]: I0122 17:30:29.990558 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sfkpq" Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.068472 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfkpq"] Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.137432 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6nnrg"] Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.141555 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6nnrg" podUID="6353b564-856d-4648-88f7-b4630ec7bf7b" 
containerName="registry-server" containerID="cri-o://cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa" gracePeriod=2 Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.752238 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nnrg" Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.944917 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-catalog-content\") pod \"6353b564-856d-4648-88f7-b4630ec7bf7b\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.945202 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2blxx\" (UniqueName: \"kubernetes.io/projected/6353b564-856d-4648-88f7-b4630ec7bf7b-kube-api-access-2blxx\") pod \"6353b564-856d-4648-88f7-b4630ec7bf7b\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.945244 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-utilities\") pod \"6353b564-856d-4648-88f7-b4630ec7bf7b\" (UID: \"6353b564-856d-4648-88f7-b4630ec7bf7b\") " Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.947674 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-utilities" (OuterVolumeSpecName: "utilities") pod "6353b564-856d-4648-88f7-b4630ec7bf7b" (UID: "6353b564-856d-4648-88f7-b4630ec7bf7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.948141 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.955114 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6353b564-856d-4648-88f7-b4630ec7bf7b-kube-api-access-2blxx" (OuterVolumeSpecName: "kube-api-access-2blxx") pod "6353b564-856d-4648-88f7-b4630ec7bf7b" (UID: "6353b564-856d-4648-88f7-b4630ec7bf7b"). InnerVolumeSpecName "kube-api-access-2blxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:30:30 crc kubenswrapper[4758]: I0122 17:30:30.994397 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6353b564-856d-4648-88f7-b4630ec7bf7b" (UID: "6353b564-856d-4648-88f7-b4630ec7bf7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.049724 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2blxx\" (UniqueName: \"kubernetes.io/projected/6353b564-856d-4648-88f7-b4630ec7bf7b-kube-api-access-2blxx\") on node \"crc\" DevicePath \"\"" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.049769 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6353b564-856d-4648-88f7-b4630ec7bf7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.125528 4758 generic.go:334] "Generic (PLEG): container finished" podID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerID="cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa" exitCode=0 Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.125572 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nnrg" event={"ID":"6353b564-856d-4648-88f7-b4630ec7bf7b","Type":"ContainerDied","Data":"cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa"} Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.125600 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nnrg" event={"ID":"6353b564-856d-4648-88f7-b4630ec7bf7b","Type":"ContainerDied","Data":"9e1455f839f500e1994a438c9abfe8179097424275168e7eb23728d87c792213"} Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.125617 4758 scope.go:117] "RemoveContainer" containerID="cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.125804 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6nnrg" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.185803 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6nnrg"] Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.193413 4758 scope.go:117] "RemoveContainer" containerID="b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.195906 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6nnrg"] Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.503296 4758 scope.go:117] "RemoveContainer" containerID="f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.539859 4758 scope.go:117] "RemoveContainer" containerID="cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa" Jan 22 17:30:31 crc kubenswrapper[4758]: E0122 17:30:31.540647 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa\": container with ID starting with cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa not found: ID does not exist" containerID="cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.540777 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa"} err="failed to get container status \"cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa\": rpc error: code = NotFound desc = could not find container \"cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa\": container with ID starting with cc14190399fa000c175563324271e37cd674268625e6ea69434a23b3f6e73cfa not found: ID does not exist" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.540823 4758 scope.go:117] "RemoveContainer" containerID="b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086" Jan 22 17:30:31 crc kubenswrapper[4758]: E0122 17:30:31.541233 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086\": container with ID starting with b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086 not found: ID does not exist" containerID="b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.541282 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086"} err="failed to get container status \"b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086\": rpc error: code = NotFound desc = could not find container \"b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086\": container with ID starting with b0b034029f2ae2ebb289a800ea137ab6e7851be40c19a2d5468b27381d4f4086 not found: ID does not exist" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.541296 4758 scope.go:117] "RemoveContainer" containerID="f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f" Jan 22 17:30:31 crc kubenswrapper[4758]: E0122 17:30:31.541544 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f\": container with ID starting with f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f not found: ID does not exist" containerID="f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f" Jan 22 17:30:31 crc kubenswrapper[4758]: I0122 17:30:31.541563 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f"} err="failed to get container status \"f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f\": rpc error: code = NotFound desc = could not find container \"f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f\": container with ID starting with f138a26d8b5d832ccf436e98629b2a45f6688e6baa667e31543d761d78eae15f not found: ID does not exist" Jan 22 17:30:32 crc kubenswrapper[4758]: I0122 17:30:32.819633 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6353b564-856d-4648-88f7-b4630ec7bf7b" path="/var/lib/kubelet/pods/6353b564-856d-4648-88f7-b4630ec7bf7b/volumes" Jan 22 17:31:02 crc kubenswrapper[4758]: I0122 17:31:02.531853 4758 scope.go:117] "RemoveContainer" containerID="029ea761214c3d49a4e493c6aa30b929af7662057a755eb375810f493f454371" Jan 22 17:32:37 crc kubenswrapper[4758]: I0122 17:32:37.968796 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-crkxd"] Jan 22 17:32:37 crc kubenswrapper[4758]: E0122 17:32:37.969841 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerName="registry-server" Jan 22 17:32:37 crc kubenswrapper[4758]: I0122 17:32:37.969862 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerName="registry-server" Jan 22 17:32:37 crc kubenswrapper[4758]: E0122 17:32:37.969902 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerName="extract-content" Jan 22 17:32:37 crc kubenswrapper[4758]: I0122 17:32:37.969911 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerName="extract-content" Jan 22 17:32:37 crc kubenswrapper[4758]: E0122 17:32:37.969930 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerName="extract-utilities" Jan 22 17:32:37 crc kubenswrapper[4758]: I0122 17:32:37.969936 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerName="extract-utilities" Jan 22 17:32:37 crc kubenswrapper[4758]: I0122 17:32:37.970192 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6353b564-856d-4648-88f7-b4630ec7bf7b" containerName="registry-server" Jan 22 17:32:37 crc kubenswrapper[4758]: I0122 17:32:37.972028 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:37 crc kubenswrapper[4758]: I0122 17:32:37.999797 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crkxd"] Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.032963 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-catalog-content\") pod \"redhat-marketplace-crkxd\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.033765 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96qc\" (UniqueName: \"kubernetes.io/projected/0fe830c8-028e-4e83-b8c2-f379ca0f7099-kube-api-access-q96qc\") pod \"redhat-marketplace-crkxd\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.033809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-utilities\") pod \"redhat-marketplace-crkxd\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.136141 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-utilities\") pod \"redhat-marketplace-crkxd\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.136611 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-catalog-content\") pod \"redhat-marketplace-crkxd\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.136790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q96qc\" (UniqueName: \"kubernetes.io/projected/0fe830c8-028e-4e83-b8c2-f379ca0f7099-kube-api-access-q96qc\") pod \"redhat-marketplace-crkxd\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.137005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-utilities\") pod \"redhat-marketplace-crkxd\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.137325 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-catalog-content\") pod \"redhat-marketplace-crkxd\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.183544 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q96qc\" (UniqueName: \"kubernetes.io/projected/0fe830c8-028e-4e83-b8c2-f379ca0f7099-kube-api-access-q96qc\") pod \"redhat-marketplace-crkxd\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.290646 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:38 crc kubenswrapper[4758]: I0122 17:32:38.847133 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crkxd"] Jan 22 17:32:39 crc kubenswrapper[4758]: I0122 17:32:39.786061 4758 generic.go:334] "Generic (PLEG): container finished" podID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerID="619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a" exitCode=0 Jan 22 17:32:39 crc kubenswrapper[4758]: I0122 17:32:39.786209 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crkxd" event={"ID":"0fe830c8-028e-4e83-b8c2-f379ca0f7099","Type":"ContainerDied","Data":"619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a"} Jan 22 17:32:39 crc kubenswrapper[4758]: I0122 17:32:39.787598 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crkxd" event={"ID":"0fe830c8-028e-4e83-b8c2-f379ca0f7099","Type":"ContainerStarted","Data":"fb51e46a8d15e0d419539dd760c1233ee653a597c44cb4b6dcd9d968a152b711"} Jan 22 17:32:40 crc kubenswrapper[4758]: I0122 17:32:40.799400 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crkxd" event={"ID":"0fe830c8-028e-4e83-b8c2-f379ca0f7099","Type":"ContainerStarted","Data":"ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef"} Jan 22 17:32:41 crc kubenswrapper[4758]: I0122 17:32:41.823230 4758 generic.go:334] "Generic (PLEG): container finished" podID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerID="ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef" exitCode=0 Jan 22 17:32:41 crc kubenswrapper[4758]: I0122 17:32:41.823578 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crkxd" event={"ID":"0fe830c8-028e-4e83-b8c2-f379ca0f7099","Type":"ContainerDied","Data":"ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef"} Jan 22 17:32:43 crc kubenswrapper[4758]: I0122 17:32:43.837310 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:32:43 crc kubenswrapper[4758]: I0122 17:32:43.837618 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:32:43 crc kubenswrapper[4758]: I0122 17:32:43.844154 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crkxd" event={"ID":"0fe830c8-028e-4e83-b8c2-f379ca0f7099","Type":"ContainerStarted","Data":"3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92"} Jan 22 17:32:43 crc 
kubenswrapper[4758]: I0122 17:32:43.871193 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-crkxd" podStartSLOduration=3.738284189 podStartE2EDuration="6.871165766s" podCreationTimestamp="2026-01-22 17:32:37 +0000 UTC" firstStartedPulling="2026-01-22 17:32:39.788648293 +0000 UTC m=+3781.271987588" lastFinishedPulling="2026-01-22 17:32:42.92152986 +0000 UTC m=+3784.404869165" observedRunningTime="2026-01-22 17:32:43.860025972 +0000 UTC m=+3785.343365257" watchObservedRunningTime="2026-01-22 17:32:43.871165766 +0000 UTC m=+3785.354505051" Jan 22 17:32:48 crc kubenswrapper[4758]: I0122 17:32:48.291705 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:48 crc kubenswrapper[4758]: I0122 17:32:48.292376 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:48 crc kubenswrapper[4758]: I0122 17:32:48.353668 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:48 crc kubenswrapper[4758]: I0122 17:32:48.967339 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:49 crc kubenswrapper[4758]: I0122 17:32:49.049545 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crkxd"] Jan 22 17:32:50 crc kubenswrapper[4758]: I0122 17:32:50.918280 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-crkxd" podUID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerName="registry-server" containerID="cri-o://3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92" gracePeriod=2 Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.384443 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.454468 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q96qc\" (UniqueName: \"kubernetes.io/projected/0fe830c8-028e-4e83-b8c2-f379ca0f7099-kube-api-access-q96qc\") pod \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.454604 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-utilities\") pod \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.454700 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-catalog-content\") pod \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\" (UID: \"0fe830c8-028e-4e83-b8c2-f379ca0f7099\") " Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.456166 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-utilities" (OuterVolumeSpecName: "utilities") pod "0fe830c8-028e-4e83-b8c2-f379ca0f7099" (UID: "0fe830c8-028e-4e83-b8c2-f379ca0f7099"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.462017 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe830c8-028e-4e83-b8c2-f379ca0f7099-kube-api-access-q96qc" (OuterVolumeSpecName: "kube-api-access-q96qc") pod "0fe830c8-028e-4e83-b8c2-f379ca0f7099" (UID: "0fe830c8-028e-4e83-b8c2-f379ca0f7099"). InnerVolumeSpecName "kube-api-access-q96qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.475385 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fe830c8-028e-4e83-b8c2-f379ca0f7099" (UID: "0fe830c8-028e-4e83-b8c2-f379ca0f7099"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.556660 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q96qc\" (UniqueName: \"kubernetes.io/projected/0fe830c8-028e-4e83-b8c2-f379ca0f7099-kube-api-access-q96qc\") on node \"crc\" DevicePath \"\"" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.556697 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.556708 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fe830c8-028e-4e83-b8c2-f379ca0f7099-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.931634 4758 generic.go:334] "Generic (PLEG): container finished" podID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerID="3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92" exitCode=0 Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.931695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crkxd" event={"ID":"0fe830c8-028e-4e83-b8c2-f379ca0f7099","Type":"ContainerDied","Data":"3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92"} Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.931710 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crkxd" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.932058 4758 scope.go:117] "RemoveContainer" containerID="3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.932043 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crkxd" event={"ID":"0fe830c8-028e-4e83-b8c2-f379ca0f7099","Type":"ContainerDied","Data":"fb51e46a8d15e0d419539dd760c1233ee653a597c44cb4b6dcd9d968a152b711"} Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.954441 4758 scope.go:117] "RemoveContainer" containerID="ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.980509 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crkxd"] Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.993044 4758 scope.go:117] "RemoveContainer" containerID="619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a" Jan 22 17:32:51 crc kubenswrapper[4758]: I0122 17:32:51.993801 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-crkxd"] Jan 22 17:32:52 crc kubenswrapper[4758]: I0122 17:32:52.076920 4758 scope.go:117] "RemoveContainer" containerID="3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92" Jan 22 17:32:52 crc kubenswrapper[4758]: E0122 17:32:52.077832 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92\": container with ID starting with 3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92 not found: ID does not exist" containerID="3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92" Jan 22 17:32:52 crc kubenswrapper[4758]: I0122 17:32:52.077877 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92"} err="failed to get container status \"3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92\": rpc error: code = NotFound desc = could not find container \"3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92\": container with ID starting with 3b50b5fe2bfbbfbd4c2f4442e36b4f2b0dde886b94adb4c01b7cbab98c62cf92 not found: ID does not exist" Jan 22 17:32:52 crc kubenswrapper[4758]: I0122 17:32:52.077899 4758 scope.go:117] "RemoveContainer" containerID="ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef" Jan 22 17:32:52 crc kubenswrapper[4758]: E0122 17:32:52.080778 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef\": container with ID starting with ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef not found: ID does not exist" containerID="ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef" Jan 22 17:32:52 crc kubenswrapper[4758]: I0122 17:32:52.080839 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef"} err="failed to get container status \"ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef\": rpc error: code = NotFound desc = could not find 
container \"ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef\": container with ID starting with ecdb6435e7433985484b984a81a10cfdcca2408179bfdc7407f2e053c996e2ef not found: ID does not exist" Jan 22 17:32:52 crc kubenswrapper[4758]: I0122 17:32:52.080874 4758 scope.go:117] "RemoveContainer" containerID="619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a" Jan 22 17:32:52 crc kubenswrapper[4758]: E0122 17:32:52.081316 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a\": container with ID starting with 619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a not found: ID does not exist" containerID="619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a" Jan 22 17:32:52 crc kubenswrapper[4758]: I0122 17:32:52.081362 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a"} err="failed to get container status \"619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a\": rpc error: code = NotFound desc = could not find container \"619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a\": container with ID starting with 619be4b03e51ada7e1adc9d7a98ca34df84be9fa29e8cb355143d755ce0f7c7a not found: ID does not exist" Jan 22 17:32:52 crc kubenswrapper[4758]: I0122 17:32:52.825525 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" path="/var/lib/kubelet/pods/0fe830c8-028e-4e83-b8c2-f379ca0f7099/volumes" Jan 22 17:33:13 crc kubenswrapper[4758]: I0122 17:33:13.837505 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:33:13 crc kubenswrapper[4758]: I0122 17:33:13.838113 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:33:43 crc kubenswrapper[4758]: I0122 17:33:43.837030 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:33:43 crc kubenswrapper[4758]: I0122 17:33:43.837735 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:33:43 crc kubenswrapper[4758]: I0122 17:33:43.837850 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:33:43 crc kubenswrapper[4758]: I0122 17:33:43.838939 4758 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:33:43 crc kubenswrapper[4758]: I0122 17:33:43.839049 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" gracePeriod=600 Jan 22 17:33:43 crc kubenswrapper[4758]: E0122 17:33:43.965232 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:33:44 crc kubenswrapper[4758]: I0122 17:33:44.897283 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" exitCode=0 Jan 22 17:33:44 crc kubenswrapper[4758]: I0122 17:33:44.897351 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688"} Jan 22 17:33:44 crc kubenswrapper[4758]: I0122 17:33:44.897683 4758 scope.go:117] "RemoveContainer" containerID="1a6b57c06e858afc9440772312f2f1d6c577633fd537cacb24d567278025f461" Jan 22 17:33:44 crc kubenswrapper[4758]: I0122 17:33:44.898796 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:33:44 crc kubenswrapper[4758]: E0122 17:33:44.899262 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:33:59 crc kubenswrapper[4758]: I0122 17:33:59.808345 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:33:59 crc kubenswrapper[4758]: E0122 17:33:59.809222 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:34:12 crc kubenswrapper[4758]: I0122 17:34:12.809156 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:34:12 crc kubenswrapper[4758]: E0122 17:34:12.810181 4758 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:34:26 crc kubenswrapper[4758]: I0122 17:34:26.810845 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:34:26 crc kubenswrapper[4758]: E0122 17:34:26.811874 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:34:38 crc kubenswrapper[4758]: I0122 17:34:38.820149 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:34:38 crc kubenswrapper[4758]: E0122 17:34:38.821608 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:34:52 crc kubenswrapper[4758]: I0122 17:34:52.808217 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:34:52 crc kubenswrapper[4758]: E0122 17:34:52.809164 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:35:04 crc kubenswrapper[4758]: I0122 17:35:04.809278 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:35:04 crc kubenswrapper[4758]: E0122 17:35:04.810268 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:35:18 crc kubenswrapper[4758]: I0122 17:35:18.815093 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:35:18 crc kubenswrapper[4758]: E0122 17:35:18.815919 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:35:32 crc kubenswrapper[4758]: I0122 17:35:32.809225 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:35:32 crc kubenswrapper[4758]: E0122 17:35:32.810886 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.418535 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g9jtj"] Jan 22 17:35:33 crc kubenswrapper[4758]: E0122 17:35:33.421879 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerName="extract-content" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.421923 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerName="extract-content" Jan 22 17:35:33 crc kubenswrapper[4758]: E0122 17:35:33.421969 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerName="registry-server" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.421979 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerName="registry-server" Jan 22 17:35:33 crc kubenswrapper[4758]: E0122 17:35:33.422003 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerName="extract-utilities" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.422013 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerName="extract-utilities" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.422314 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe830c8-028e-4e83-b8c2-f379ca0f7099" containerName="registry-server" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.425113 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.429579 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9jtj"] Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.548664 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-catalog-content\") pod \"redhat-operators-g9jtj\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.548794 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-utilities\") pod \"redhat-operators-g9jtj\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.548836 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wc2\" (UniqueName: \"kubernetes.io/projected/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-kube-api-access-94wc2\") pod \"redhat-operators-g9jtj\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.651277 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-utilities\") pod \"redhat-operators-g9jtj\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.651330 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94wc2\" (UniqueName: \"kubernetes.io/projected/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-kube-api-access-94wc2\") pod \"redhat-operators-g9jtj\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.651485 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-catalog-content\") pod \"redhat-operators-g9jtj\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.651962 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-utilities\") pod \"redhat-operators-g9jtj\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.651988 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-catalog-content\") pod \"redhat-operators-g9jtj\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.678303 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-94wc2\" (UniqueName: \"kubernetes.io/projected/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-kube-api-access-94wc2\") pod \"redhat-operators-g9jtj\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:33 crc kubenswrapper[4758]: I0122 17:35:33.768293 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:34 crc kubenswrapper[4758]: I0122 17:35:34.295758 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9jtj"] Jan 22 17:35:34 crc kubenswrapper[4758]: I0122 17:35:34.974424 4758 generic.go:334] "Generic (PLEG): container finished" podID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerID="ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d" exitCode=0 Jan 22 17:35:34 crc kubenswrapper[4758]: I0122 17:35:34.974489 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9jtj" event={"ID":"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91","Type":"ContainerDied","Data":"ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d"} Jan 22 17:35:34 crc kubenswrapper[4758]: I0122 17:35:34.974851 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9jtj" event={"ID":"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91","Type":"ContainerStarted","Data":"ce3faa5fd8dfa94eb1907855737bd915e82e02479cde50e0a34cabb7522f8dc9"} Jan 22 17:35:34 crc kubenswrapper[4758]: I0122 17:35:34.976732 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:35:37 crc kubenswrapper[4758]: I0122 17:35:37.014499 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9jtj" event={"ID":"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91","Type":"ContainerStarted","Data":"6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c"} Jan 22 17:35:41 crc kubenswrapper[4758]: I0122 17:35:41.059935 4758 generic.go:334] "Generic (PLEG): container finished" podID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerID="6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c" exitCode=0 Jan 22 17:35:41 crc kubenswrapper[4758]: I0122 17:35:41.059971 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9jtj" event={"ID":"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91","Type":"ContainerDied","Data":"6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c"} Jan 22 17:35:42 crc kubenswrapper[4758]: I0122 17:35:42.074495 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9jtj" event={"ID":"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91","Type":"ContainerStarted","Data":"cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6"} Jan 22 17:35:42 crc kubenswrapper[4758]: I0122 17:35:42.108059 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g9jtj" podStartSLOduration=2.474251165 podStartE2EDuration="9.108020869s" podCreationTimestamp="2026-01-22 17:35:33 +0000 UTC" firstStartedPulling="2026-01-22 17:35:34.976371707 +0000 UTC m=+3956.459710992" lastFinishedPulling="2026-01-22 17:35:41.610141421 +0000 UTC m=+3963.093480696" observedRunningTime="2026-01-22 17:35:42.094426368 +0000 UTC m=+3963.577765663" watchObservedRunningTime="2026-01-22 17:35:42.108020869 +0000 UTC m=+3963.591360154" Jan 22 17:35:43 crc 
kubenswrapper[4758]: I0122 17:35:43.769326 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:43 crc kubenswrapper[4758]: I0122 17:35:43.769592 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:44 crc kubenswrapper[4758]: I0122 17:35:44.818079 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g9jtj" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerName="registry-server" probeResult="failure" output=< Jan 22 17:35:44 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 17:35:44 crc kubenswrapper[4758]: > Jan 22 17:35:46 crc kubenswrapper[4758]: I0122 17:35:46.808950 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:35:46 crc kubenswrapper[4758]: E0122 17:35:46.810868 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:35:53 crc kubenswrapper[4758]: I0122 17:35:53.824852 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:53 crc kubenswrapper[4758]: I0122 17:35:53.887032 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:54 crc kubenswrapper[4758]: I0122 17:35:54.065788 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g9jtj"] Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.206031 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g9jtj" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerName="registry-server" containerID="cri-o://cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6" gracePeriod=2 Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.687241 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.777900 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-utilities\") pod \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.778088 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94wc2\" (UniqueName: \"kubernetes.io/projected/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-kube-api-access-94wc2\") pod \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.778177 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-catalog-content\") pod \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\" (UID: \"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91\") " Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.779247 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-utilities" (OuterVolumeSpecName: "utilities") pod "796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" (UID: "796eeceb-d0a5-4c26-9f57-a61cb7b5dc91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.784276 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-kube-api-access-94wc2" (OuterVolumeSpecName: "kube-api-access-94wc2") pod "796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" (UID: "796eeceb-d0a5-4c26-9f57-a61cb7b5dc91"). InnerVolumeSpecName "kube-api-access-94wc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.880471 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.880522 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94wc2\" (UniqueName: \"kubernetes.io/projected/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-kube-api-access-94wc2\") on node \"crc\" DevicePath \"\"" Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.911721 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" (UID: "796eeceb-d0a5-4c26-9f57-a61cb7b5dc91"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:35:55 crc kubenswrapper[4758]: I0122 17:35:55.982117 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.219119 4758 generic.go:334] "Generic (PLEG): container finished" podID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerID="cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6" exitCode=0 Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.219162 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9jtj" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.219181 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9jtj" event={"ID":"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91","Type":"ContainerDied","Data":"cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6"} Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.219521 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9jtj" event={"ID":"796eeceb-d0a5-4c26-9f57-a61cb7b5dc91","Type":"ContainerDied","Data":"ce3faa5fd8dfa94eb1907855737bd915e82e02479cde50e0a34cabb7522f8dc9"} Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.219560 4758 scope.go:117] "RemoveContainer" containerID="cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.247525 4758 scope.go:117] "RemoveContainer" containerID="6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.259682 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g9jtj"] Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.273689 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g9jtj"] Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.285272 4758 scope.go:117] "RemoveContainer" containerID="ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.338113 4758 scope.go:117] "RemoveContainer" containerID="cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6" Jan 22 17:35:56 crc kubenswrapper[4758]: E0122 17:35:56.340050 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6\": container with ID starting with cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6 not found: ID does not exist" containerID="cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.340100 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6"} err="failed to get container status \"cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6\": rpc error: code = NotFound desc = could not find container \"cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6\": container with ID starting with cac5fb1c9f1f6072a913df0442f91e727411fc839afeaf595b48d0c7279a86d6 not found: ID does not exist" Jan 22 17:35:56 crc 
kubenswrapper[4758]: I0122 17:35:56.340127 4758 scope.go:117] "RemoveContainer" containerID="6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c" Jan 22 17:35:56 crc kubenswrapper[4758]: E0122 17:35:56.340562 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c\": container with ID starting with 6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c not found: ID does not exist" containerID="6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.340641 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c"} err="failed to get container status \"6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c\": rpc error: code = NotFound desc = could not find container \"6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c\": container with ID starting with 6db04a2d86d11255f1f351366f1ddfeeae720d9f19fd9c9935eaa28953a29f4c not found: ID does not exist" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.340700 4758 scope.go:117] "RemoveContainer" containerID="ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d" Jan 22 17:35:56 crc kubenswrapper[4758]: E0122 17:35:56.341106 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d\": container with ID starting with ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d not found: ID does not exist" containerID="ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.341135 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d"} err="failed to get container status \"ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d\": rpc error: code = NotFound desc = could not find container \"ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d\": container with ID starting with ac895c9621fff6a89071ca351a9753c8c5576375910be294a3e42e4791e4eb3d not found: ID does not exist" Jan 22 17:35:56 crc kubenswrapper[4758]: I0122 17:35:56.821179 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" path="/var/lib/kubelet/pods/796eeceb-d0a5-4c26-9f57-a61cb7b5dc91/volumes" Jan 22 17:35:58 crc kubenswrapper[4758]: I0122 17:35:58.818704 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:35:58 crc kubenswrapper[4758]: E0122 17:35:58.819243 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:36:11 crc kubenswrapper[4758]: I0122 17:36:11.807439 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" 
Jan 22 17:36:11 crc kubenswrapper[4758]: E0122 17:36:11.808270 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:36:25 crc kubenswrapper[4758]: I0122 17:36:25.808235 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:36:25 crc kubenswrapper[4758]: E0122 17:36:25.809028 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:36:40 crc kubenswrapper[4758]: I0122 17:36:40.808661 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:36:40 crc kubenswrapper[4758]: E0122 17:36:40.809433 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:36:51 crc kubenswrapper[4758]: I0122 17:36:51.808033 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:36:51 crc kubenswrapper[4758]: E0122 17:36:51.808637 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:37:06 crc kubenswrapper[4758]: I0122 17:37:06.810379 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:37:06 crc kubenswrapper[4758]: E0122 17:37:06.811175 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:37:18 crc kubenswrapper[4758]: I0122 17:37:18.829330 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:37:18 crc kubenswrapper[4758]: E0122 17:37:18.830190 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.521895 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-psqns"] Jan 22 17:37:25 crc kubenswrapper[4758]: E0122 17:37:25.523045 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerName="extract-utilities" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.523068 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerName="extract-utilities" Jan 22 17:37:25 crc kubenswrapper[4758]: E0122 17:37:25.523119 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerName="registry-server" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.523125 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerName="registry-server" Jan 22 17:37:25 crc kubenswrapper[4758]: E0122 17:37:25.523140 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerName="extract-content" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.523147 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerName="extract-content" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.523364 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="796eeceb-d0a5-4c26-9f57-a61cb7b5dc91" containerName="registry-server" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.525116 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.532625 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psqns"] Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.561002 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-catalog-content\") pod \"certified-operators-psqns\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.561174 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-utilities\") pod \"certified-operators-psqns\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.561241 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjf6\" (UniqueName: \"kubernetes.io/projected/732aa0d5-51d4-411e-a17b-c941c0b560f7-kube-api-access-ccjf6\") pod \"certified-operators-psqns\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.664106 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-catalog-content\") pod \"certified-operators-psqns\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.664546 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-utilities\") pod \"certified-operators-psqns\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.664732 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-catalog-content\") pod \"certified-operators-psqns\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.665020 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-utilities\") pod \"certified-operators-psqns\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.665117 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccjf6\" (UniqueName: \"kubernetes.io/projected/732aa0d5-51d4-411e-a17b-c941c0b560f7-kube-api-access-ccjf6\") pod \"certified-operators-psqns\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.800964 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ccjf6\" (UniqueName: \"kubernetes.io/projected/732aa0d5-51d4-411e-a17b-c941c0b560f7-kube-api-access-ccjf6\") pod \"certified-operators-psqns\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:25 crc kubenswrapper[4758]: I0122 17:37:25.851891 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:26 crc kubenswrapper[4758]: I0122 17:37:26.425073 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psqns"] Jan 22 17:37:27 crc kubenswrapper[4758]: I0122 17:37:27.202424 4758 generic.go:334] "Generic (PLEG): container finished" podID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerID="04e8fe929f96f93f0eb792e82aacacf766244f75a4b3298ac5aeaf686f62b21b" exitCode=0 Jan 22 17:37:27 crc kubenswrapper[4758]: I0122 17:37:27.202560 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psqns" event={"ID":"732aa0d5-51d4-411e-a17b-c941c0b560f7","Type":"ContainerDied","Data":"04e8fe929f96f93f0eb792e82aacacf766244f75a4b3298ac5aeaf686f62b21b"} Jan 22 17:37:27 crc kubenswrapper[4758]: I0122 17:37:27.203801 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psqns" event={"ID":"732aa0d5-51d4-411e-a17b-c941c0b560f7","Type":"ContainerStarted","Data":"3fef26ebb162763751869aa47eaae945ab912f3198b70bebb1f67d442f15f077"} Jan 22 17:37:28 crc kubenswrapper[4758]: I0122 17:37:28.216831 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psqns" event={"ID":"732aa0d5-51d4-411e-a17b-c941c0b560f7","Type":"ContainerStarted","Data":"4bfa51597e19e6a59d92d5ea3eeba178ddc139db4d137f64f41224d2a8b98ce6"} Jan 22 17:37:29 crc kubenswrapper[4758]: I0122 17:37:29.231863 4758 generic.go:334] "Generic (PLEG): container finished" podID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerID="4bfa51597e19e6a59d92d5ea3eeba178ddc139db4d137f64f41224d2a8b98ce6" exitCode=0 Jan 22 17:37:29 crc kubenswrapper[4758]: I0122 17:37:29.231956 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psqns" event={"ID":"732aa0d5-51d4-411e-a17b-c941c0b560f7","Type":"ContainerDied","Data":"4bfa51597e19e6a59d92d5ea3eeba178ddc139db4d137f64f41224d2a8b98ce6"} Jan 22 17:37:30 crc kubenswrapper[4758]: I0122 17:37:30.260441 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psqns" event={"ID":"732aa0d5-51d4-411e-a17b-c941c0b560f7","Type":"ContainerStarted","Data":"ded490e3ead6d8c252d37a1e0c4c53f4ab0dd94c9710408465fecf80ee5bf6ae"} Jan 22 17:37:30 crc kubenswrapper[4758]: I0122 17:37:30.284593 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-psqns" podStartSLOduration=2.83314308 podStartE2EDuration="5.284566815s" podCreationTimestamp="2026-01-22 17:37:25 +0000 UTC" firstStartedPulling="2026-01-22 17:37:27.206015032 +0000 UTC m=+4068.689354317" lastFinishedPulling="2026-01-22 17:37:29.657438777 +0000 UTC m=+4071.140778052" observedRunningTime="2026-01-22 17:37:30.281523402 +0000 UTC m=+4071.764862687" watchObservedRunningTime="2026-01-22 17:37:30.284566815 +0000 UTC m=+4071.767906100" Jan 22 17:37:32 crc kubenswrapper[4758]: I0122 17:37:32.809265 4758 scope.go:117] "RemoveContainer" 
containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:37:32 crc kubenswrapper[4758]: E0122 17:37:32.810290 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:37:35 crc kubenswrapper[4758]: I0122 17:37:35.853207 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:35 crc kubenswrapper[4758]: I0122 17:37:35.853805 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:35 crc kubenswrapper[4758]: I0122 17:37:35.903045 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:36 crc kubenswrapper[4758]: I0122 17:37:36.372012 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:36 crc kubenswrapper[4758]: I0122 17:37:36.427007 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psqns"] Jan 22 17:37:38 crc kubenswrapper[4758]: I0122 17:37:38.344866 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-psqns" podUID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerName="registry-server" containerID="cri-o://ded490e3ead6d8c252d37a1e0c4c53f4ab0dd94c9710408465fecf80ee5bf6ae" gracePeriod=2 Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.360843 4758 generic.go:334] "Generic (PLEG): container finished" podID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerID="ded490e3ead6d8c252d37a1e0c4c53f4ab0dd94c9710408465fecf80ee5bf6ae" exitCode=0 Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.360960 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psqns" event={"ID":"732aa0d5-51d4-411e-a17b-c941c0b560f7","Type":"ContainerDied","Data":"ded490e3ead6d8c252d37a1e0c4c53f4ab0dd94c9710408465fecf80ee5bf6ae"} Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.464901 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.545450 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-utilities\") pod \"732aa0d5-51d4-411e-a17b-c941c0b560f7\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.545694 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccjf6\" (UniqueName: \"kubernetes.io/projected/732aa0d5-51d4-411e-a17b-c941c0b560f7-kube-api-access-ccjf6\") pod \"732aa0d5-51d4-411e-a17b-c941c0b560f7\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.545785 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-catalog-content\") pod \"732aa0d5-51d4-411e-a17b-c941c0b560f7\" (UID: \"732aa0d5-51d4-411e-a17b-c941c0b560f7\") " Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.546568 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-utilities" (OuterVolumeSpecName: "utilities") pod "732aa0d5-51d4-411e-a17b-c941c0b560f7" (UID: "732aa0d5-51d4-411e-a17b-c941c0b560f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.559694 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/732aa0d5-51d4-411e-a17b-c941c0b560f7-kube-api-access-ccjf6" (OuterVolumeSpecName: "kube-api-access-ccjf6") pod "732aa0d5-51d4-411e-a17b-c941c0b560f7" (UID: "732aa0d5-51d4-411e-a17b-c941c0b560f7"). InnerVolumeSpecName "kube-api-access-ccjf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.601725 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "732aa0d5-51d4-411e-a17b-c941c0b560f7" (UID: "732aa0d5-51d4-411e-a17b-c941c0b560f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.648695 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.648758 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccjf6\" (UniqueName: \"kubernetes.io/projected/732aa0d5-51d4-411e-a17b-c941c0b560f7-kube-api-access-ccjf6\") on node \"crc\" DevicePath \"\"" Jan 22 17:37:39 crc kubenswrapper[4758]: I0122 17:37:39.648773 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/732aa0d5-51d4-411e-a17b-c941c0b560f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:37:40 crc kubenswrapper[4758]: I0122 17:37:40.372416 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psqns" event={"ID":"732aa0d5-51d4-411e-a17b-c941c0b560f7","Type":"ContainerDied","Data":"3fef26ebb162763751869aa47eaae945ab912f3198b70bebb1f67d442f15f077"} Jan 22 17:37:40 crc kubenswrapper[4758]: I0122 17:37:40.372849 4758 scope.go:117] "RemoveContainer" containerID="ded490e3ead6d8c252d37a1e0c4c53f4ab0dd94c9710408465fecf80ee5bf6ae" Jan 22 17:37:40 crc kubenswrapper[4758]: I0122 17:37:40.372501 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psqns" Jan 22 17:37:40 crc kubenswrapper[4758]: I0122 17:37:40.397151 4758 scope.go:117] "RemoveContainer" containerID="4bfa51597e19e6a59d92d5ea3eeba178ddc139db4d137f64f41224d2a8b98ce6" Jan 22 17:37:40 crc kubenswrapper[4758]: I0122 17:37:40.440434 4758 scope.go:117] "RemoveContainer" containerID="04e8fe929f96f93f0eb792e82aacacf766244f75a4b3298ac5aeaf686f62b21b" Jan 22 17:37:40 crc kubenswrapper[4758]: I0122 17:37:40.441097 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psqns"] Jan 22 17:37:40 crc kubenswrapper[4758]: I0122 17:37:40.450039 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-psqns"] Jan 22 17:37:40 crc kubenswrapper[4758]: I0122 17:37:40.819296 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="732aa0d5-51d4-411e-a17b-c941c0b560f7" path="/var/lib/kubelet/pods/732aa0d5-51d4-411e-a17b-c941c0b560f7/volumes" Jan 22 17:37:43 crc kubenswrapper[4758]: I0122 17:37:43.808268 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:37:43 crc kubenswrapper[4758]: E0122 17:37:43.808982 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:37:55 crc kubenswrapper[4758]: I0122 17:37:55.809046 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:37:55 crc kubenswrapper[4758]: E0122 17:37:55.809886 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:38:07 crc kubenswrapper[4758]: I0122 17:38:07.808535 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:38:07 crc kubenswrapper[4758]: E0122 17:38:07.809158 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:38:19 crc kubenswrapper[4758]: I0122 17:38:19.808909 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:38:19 crc kubenswrapper[4758]: E0122 17:38:19.809877 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:38:32 crc kubenswrapper[4758]: I0122 17:38:32.809529 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:38:32 crc kubenswrapper[4758]: E0122 17:38:32.810489 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:38:44 crc kubenswrapper[4758]: I0122 17:38:44.808354 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 17:38:45 crc kubenswrapper[4758]: I0122 17:38:45.112266 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"fb2b5dae11488cf1f921401dbbd3aaca34dac8cdcf379ae8fd0abef128b1dfc5"} Jan 22 17:40:27 crc kubenswrapper[4758]: I0122 17:40:27.989558 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j2psb"] Jan 22 17:40:27 crc kubenswrapper[4758]: E0122 17:40:27.990807 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerName="extract-content" Jan 22 17:40:27 crc kubenswrapper[4758]: I0122 17:40:27.990836 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerName="extract-content" Jan 22 17:40:27 crc kubenswrapper[4758]: E0122 17:40:27.990875 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerName="registry-server" Jan 22 17:40:27 crc kubenswrapper[4758]: I0122 17:40:27.990888 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerName="registry-server" Jan 22 17:40:27 crc kubenswrapper[4758]: E0122 17:40:27.990915 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerName="extract-utilities" Jan 22 17:40:27 crc kubenswrapper[4758]: I0122 17:40:27.990926 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerName="extract-utilities" Jan 22 17:40:27 crc kubenswrapper[4758]: I0122 17:40:27.991239 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="732aa0d5-51d4-411e-a17b-c941c0b560f7" containerName="registry-server" Jan 22 17:40:27 crc kubenswrapper[4758]: I0122 17:40:27.993474 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.032642 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2psb"] Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.131669 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-catalog-content\") pod \"community-operators-j2psb\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.131752 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4h8r\" (UniqueName: \"kubernetes.io/projected/30657f72-e55a-4aff-b094-b6b3d6a5022f-kube-api-access-j4h8r\") pod \"community-operators-j2psb\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.132039 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-utilities\") pod \"community-operators-j2psb\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.234453 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-utilities\") pod \"community-operators-j2psb\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.234997 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-catalog-content\") pod \"community-operators-j2psb\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.235062 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4h8r\" (UniqueName: \"kubernetes.io/projected/30657f72-e55a-4aff-b094-b6b3d6a5022f-kube-api-access-j4h8r\") pod 
\"community-operators-j2psb\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.235090 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-utilities\") pod \"community-operators-j2psb\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.235501 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-catalog-content\") pod \"community-operators-j2psb\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.265982 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4h8r\" (UniqueName: \"kubernetes.io/projected/30657f72-e55a-4aff-b094-b6b3d6a5022f-kube-api-access-j4h8r\") pod \"community-operators-j2psb\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.322580 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:28 crc kubenswrapper[4758]: I0122 17:40:28.895305 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2psb"] Jan 22 17:40:29 crc kubenswrapper[4758]: I0122 17:40:29.316209 4758 generic.go:334] "Generic (PLEG): container finished" podID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerID="f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3" exitCode=0 Jan 22 17:40:29 crc kubenswrapper[4758]: I0122 17:40:29.316265 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2psb" event={"ID":"30657f72-e55a-4aff-b094-b6b3d6a5022f","Type":"ContainerDied","Data":"f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3"} Jan 22 17:40:29 crc kubenswrapper[4758]: I0122 17:40:29.316296 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2psb" event={"ID":"30657f72-e55a-4aff-b094-b6b3d6a5022f","Type":"ContainerStarted","Data":"e6bd1ed33b4987b52b1aca489b084567a08b64e10f561b7123d8d0058286fae7"} Jan 22 17:40:30 crc kubenswrapper[4758]: I0122 17:40:30.330685 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2psb" event={"ID":"30657f72-e55a-4aff-b094-b6b3d6a5022f","Type":"ContainerStarted","Data":"5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583"} Jan 22 17:40:31 crc kubenswrapper[4758]: I0122 17:40:31.348579 4758 generic.go:334] "Generic (PLEG): container finished" podID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerID="5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583" exitCode=0 Jan 22 17:40:31 crc kubenswrapper[4758]: I0122 17:40:31.348674 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2psb" event={"ID":"30657f72-e55a-4aff-b094-b6b3d6a5022f","Type":"ContainerDied","Data":"5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583"} Jan 22 17:40:32 crc kubenswrapper[4758]: I0122 17:40:32.363446 4758 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2psb" event={"ID":"30657f72-e55a-4aff-b094-b6b3d6a5022f","Type":"ContainerStarted","Data":"2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9"} Jan 22 17:40:32 crc kubenswrapper[4758]: I0122 17:40:32.405731 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j2psb" podStartSLOduration=2.985829522 podStartE2EDuration="5.405671419s" podCreationTimestamp="2026-01-22 17:40:27 +0000 UTC" firstStartedPulling="2026-01-22 17:40:29.320208441 +0000 UTC m=+4250.803547726" lastFinishedPulling="2026-01-22 17:40:31.740050338 +0000 UTC m=+4253.223389623" observedRunningTime="2026-01-22 17:40:32.392106979 +0000 UTC m=+4253.875446314" watchObservedRunningTime="2026-01-22 17:40:32.405671419 +0000 UTC m=+4253.889010694" Jan 22 17:40:38 crc kubenswrapper[4758]: I0122 17:40:38.323625 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:38 crc kubenswrapper[4758]: I0122 17:40:38.324460 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:38 crc kubenswrapper[4758]: I0122 17:40:38.413662 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:38 crc kubenswrapper[4758]: I0122 17:40:38.495515 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:38 crc kubenswrapper[4758]: I0122 17:40:38.655827 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2psb"] Jan 22 17:40:40 crc kubenswrapper[4758]: I0122 17:40:40.449190 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j2psb" podUID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerName="registry-server" containerID="cri-o://2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9" gracePeriod=2 Jan 22 17:40:40 crc kubenswrapper[4758]: I0122 17:40:40.968238 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.022879 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4h8r\" (UniqueName: \"kubernetes.io/projected/30657f72-e55a-4aff-b094-b6b3d6a5022f-kube-api-access-j4h8r\") pod \"30657f72-e55a-4aff-b094-b6b3d6a5022f\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.022977 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-utilities\") pod \"30657f72-e55a-4aff-b094-b6b3d6a5022f\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.023117 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-catalog-content\") pod \"30657f72-e55a-4aff-b094-b6b3d6a5022f\" (UID: \"30657f72-e55a-4aff-b094-b6b3d6a5022f\") " Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.023794 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-utilities" (OuterVolumeSpecName: "utilities") pod "30657f72-e55a-4aff-b094-b6b3d6a5022f" (UID: "30657f72-e55a-4aff-b094-b6b3d6a5022f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.030320 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30657f72-e55a-4aff-b094-b6b3d6a5022f-kube-api-access-j4h8r" (OuterVolumeSpecName: "kube-api-access-j4h8r") pod "30657f72-e55a-4aff-b094-b6b3d6a5022f" (UID: "30657f72-e55a-4aff-b094-b6b3d6a5022f"). InnerVolumeSpecName "kube-api-access-j4h8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.125014 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4h8r\" (UniqueName: \"kubernetes.io/projected/30657f72-e55a-4aff-b094-b6b3d6a5022f-kube-api-access-j4h8r\") on node \"crc\" DevicePath \"\"" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.125050 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.164225 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30657f72-e55a-4aff-b094-b6b3d6a5022f" (UID: "30657f72-e55a-4aff-b094-b6b3d6a5022f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.227968 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30657f72-e55a-4aff-b094-b6b3d6a5022f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.468199 4758 generic.go:334] "Generic (PLEG): container finished" podID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerID="2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9" exitCode=0 Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.468266 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2psb" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.468265 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2psb" event={"ID":"30657f72-e55a-4aff-b094-b6b3d6a5022f","Type":"ContainerDied","Data":"2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9"} Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.468445 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2psb" event={"ID":"30657f72-e55a-4aff-b094-b6b3d6a5022f","Type":"ContainerDied","Data":"e6bd1ed33b4987b52b1aca489b084567a08b64e10f561b7123d8d0058286fae7"} Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.468473 4758 scope.go:117] "RemoveContainer" containerID="2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.504035 4758 scope.go:117] "RemoveContainer" containerID="5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.516687 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2psb"] Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.530010 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j2psb"] Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.565443 4758 scope.go:117] "RemoveContainer" containerID="f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.607191 4758 scope.go:117] "RemoveContainer" containerID="2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9" Jan 22 17:40:41 crc kubenswrapper[4758]: E0122 17:40:41.607496 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9\": container with ID starting with 2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9 not found: ID does not exist" containerID="2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.607535 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9"} err="failed to get container status \"2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9\": rpc error: code = NotFound desc = could not find container \"2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9\": container with ID starting with 2f78c4ec33e99ad4ad381f2617ec260b13e148e81a9a69e36d93469373dc69d9 not found: ID does not exist" Jan 22 
17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.607559 4758 scope.go:117] "RemoveContainer" containerID="5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583" Jan 22 17:40:41 crc kubenswrapper[4758]: E0122 17:40:41.608300 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583\": container with ID starting with 5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583 not found: ID does not exist" containerID="5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.608329 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583"} err="failed to get container status \"5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583\": rpc error: code = NotFound desc = could not find container \"5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583\": container with ID starting with 5a44dbd8c50ae6bba8bbde5ba80ebd9c7ee9a80436025156bebec773e6416583 not found: ID does not exist" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.608353 4758 scope.go:117] "RemoveContainer" containerID="f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3" Jan 22 17:40:41 crc kubenswrapper[4758]: E0122 17:40:41.609077 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3\": container with ID starting with f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3 not found: ID does not exist" containerID="f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3" Jan 22 17:40:41 crc kubenswrapper[4758]: I0122 17:40:41.609130 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3"} err="failed to get container status \"f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3\": rpc error: code = NotFound desc = could not find container \"f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3\": container with ID starting with f550b6ae606e274baa0c424604c8771f53eeae1d633c2fe189610e73f6ec16b3 not found: ID does not exist" Jan 22 17:40:42 crc kubenswrapper[4758]: I0122 17:40:42.822709 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30657f72-e55a-4aff-b094-b6b3d6a5022f" path="/var/lib/kubelet/pods/30657f72-e55a-4aff-b094-b6b3d6a5022f/volumes" Jan 22 17:41:13 crc kubenswrapper[4758]: I0122 17:41:13.837773 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:41:13 crc kubenswrapper[4758]: I0122 17:41:13.838651 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:41:43 crc kubenswrapper[4758]: I0122 17:41:43.837316 4758 patch_prober.go:28] 
interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:41:43 crc kubenswrapper[4758]: I0122 17:41:43.838333 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:42:13 crc kubenswrapper[4758]: I0122 17:42:13.838309 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:42:13 crc kubenswrapper[4758]: I0122 17:42:13.839963 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:42:13 crc kubenswrapper[4758]: I0122 17:42:13.840087 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:42:13 crc kubenswrapper[4758]: I0122 17:42:13.841146 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb2b5dae11488cf1f921401dbbd3aaca34dac8cdcf379ae8fd0abef128b1dfc5"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:42:13 crc kubenswrapper[4758]: I0122 17:42:13.841255 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://fb2b5dae11488cf1f921401dbbd3aaca34dac8cdcf379ae8fd0abef128b1dfc5" gracePeriod=600 Jan 22 17:42:14 crc kubenswrapper[4758]: I0122 17:42:14.772833 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="fb2b5dae11488cf1f921401dbbd3aaca34dac8cdcf379ae8fd0abef128b1dfc5" exitCode=0 Jan 22 17:42:14 crc kubenswrapper[4758]: I0122 17:42:14.772957 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"fb2b5dae11488cf1f921401dbbd3aaca34dac8cdcf379ae8fd0abef128b1dfc5"} Jan 22 17:42:14 crc kubenswrapper[4758]: I0122 17:42:14.773715 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93"} Jan 22 17:42:14 crc kubenswrapper[4758]: I0122 17:42:14.773828 4758 scope.go:117] "RemoveContainer" containerID="2eb9b403711db327cd66f17ca80b7c8e2b5fed945d29ad01e1af351fcc931688" Jan 22 
17:42:38 crc kubenswrapper[4758]: I0122 17:42:38.981213 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sr7dx"] Jan 22 17:42:38 crc kubenswrapper[4758]: E0122 17:42:38.982520 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerName="extract-content" Jan 22 17:42:38 crc kubenswrapper[4758]: I0122 17:42:38.982543 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerName="extract-content" Jan 22 17:42:38 crc kubenswrapper[4758]: E0122 17:42:38.982595 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerName="registry-server" Jan 22 17:42:38 crc kubenswrapper[4758]: I0122 17:42:38.982603 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerName="registry-server" Jan 22 17:42:38 crc kubenswrapper[4758]: E0122 17:42:38.982620 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerName="extract-utilities" Jan 22 17:42:38 crc kubenswrapper[4758]: I0122 17:42:38.982628 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerName="extract-utilities" Jan 22 17:42:38 crc kubenswrapper[4758]: I0122 17:42:38.982906 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="30657f72-e55a-4aff-b094-b6b3d6a5022f" containerName="registry-server" Jan 22 17:42:38 crc kubenswrapper[4758]: I0122 17:42:38.986205 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:38 crc kubenswrapper[4758]: I0122 17:42:38.993545 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sr7dx"] Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.074175 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-utilities\") pod \"redhat-marketplace-sr7dx\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.074495 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzwqw\" (UniqueName: \"kubernetes.io/projected/9c3fda25-dd5a-444e-b32e-c57f69da819d-kube-api-access-dzwqw\") pod \"redhat-marketplace-sr7dx\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.074568 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-catalog-content\") pod \"redhat-marketplace-sr7dx\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.176911 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzwqw\" (UniqueName: \"kubernetes.io/projected/9c3fda25-dd5a-444e-b32e-c57f69da819d-kube-api-access-dzwqw\") pod \"redhat-marketplace-sr7dx\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " 
pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.176996 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-catalog-content\") pod \"redhat-marketplace-sr7dx\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.177087 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-utilities\") pod \"redhat-marketplace-sr7dx\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.177629 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-utilities\") pod \"redhat-marketplace-sr7dx\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.177668 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-catalog-content\") pod \"redhat-marketplace-sr7dx\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.209793 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzwqw\" (UniqueName: \"kubernetes.io/projected/9c3fda25-dd5a-444e-b32e-c57f69da819d-kube-api-access-dzwqw\") pod \"redhat-marketplace-sr7dx\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.309114 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:39 crc kubenswrapper[4758]: I0122 17:42:39.827996 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sr7dx"] Jan 22 17:42:40 crc kubenswrapper[4758]: I0122 17:42:40.052364 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sr7dx" event={"ID":"9c3fda25-dd5a-444e-b32e-c57f69da819d","Type":"ContainerStarted","Data":"87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017"} Jan 22 17:42:40 crc kubenswrapper[4758]: I0122 17:42:40.052812 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sr7dx" event={"ID":"9c3fda25-dd5a-444e-b32e-c57f69da819d","Type":"ContainerStarted","Data":"d40e95f31f12ba62cbf61f5ded8f249f9745fca11095f5acfc4c2f50dade7c11"} Jan 22 17:42:41 crc kubenswrapper[4758]: I0122 17:42:41.077794 4758 generic.go:334] "Generic (PLEG): container finished" podID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerID="87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017" exitCode=0 Jan 22 17:42:41 crc kubenswrapper[4758]: I0122 17:42:41.077868 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sr7dx" event={"ID":"9c3fda25-dd5a-444e-b32e-c57f69da819d","Type":"ContainerDied","Data":"87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017"} Jan 22 17:42:41 crc kubenswrapper[4758]: I0122 17:42:41.084004 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:42:42 crc kubenswrapper[4758]: I0122 17:42:42.112493 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sr7dx" event={"ID":"9c3fda25-dd5a-444e-b32e-c57f69da819d","Type":"ContainerStarted","Data":"4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454"} Jan 22 17:42:43 crc kubenswrapper[4758]: I0122 17:42:43.127790 4758 generic.go:334] "Generic (PLEG): container finished" podID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerID="4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454" exitCode=0 Jan 22 17:42:43 crc kubenswrapper[4758]: I0122 17:42:43.127862 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sr7dx" event={"ID":"9c3fda25-dd5a-444e-b32e-c57f69da819d","Type":"ContainerDied","Data":"4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454"} Jan 22 17:42:44 crc kubenswrapper[4758]: I0122 17:42:44.142671 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sr7dx" event={"ID":"9c3fda25-dd5a-444e-b32e-c57f69da819d","Type":"ContainerStarted","Data":"c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867"} Jan 22 17:42:44 crc kubenswrapper[4758]: I0122 17:42:44.165158 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sr7dx" podStartSLOduration=3.388322397 podStartE2EDuration="6.165115458s" podCreationTimestamp="2026-01-22 17:42:38 +0000 UTC" firstStartedPulling="2026-01-22 17:42:41.08368098 +0000 UTC m=+4382.567020265" lastFinishedPulling="2026-01-22 17:42:43.860474041 +0000 UTC m=+4385.343813326" observedRunningTime="2026-01-22 17:42:44.161400487 +0000 UTC m=+4385.644739782" watchObservedRunningTime="2026-01-22 17:42:44.165115458 +0000 UTC m=+4385.648454753" Jan 22 17:42:49 crc kubenswrapper[4758]: I0122 17:42:49.310203 
4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:49 crc kubenswrapper[4758]: I0122 17:42:49.311019 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:49 crc kubenswrapper[4758]: I0122 17:42:49.764928 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:50 crc kubenswrapper[4758]: I0122 17:42:50.256674 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:50 crc kubenswrapper[4758]: I0122 17:42:50.331212 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sr7dx"] Jan 22 17:42:52 crc kubenswrapper[4758]: I0122 17:42:52.217443 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sr7dx" podUID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerName="registry-server" containerID="cri-o://c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867" gracePeriod=2 Jan 22 17:42:52 crc kubenswrapper[4758]: I0122 17:42:52.872222 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:52 crc kubenswrapper[4758]: I0122 17:42:52.996179 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-utilities\") pod \"9c3fda25-dd5a-444e-b32e-c57f69da819d\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " Jan 22 17:42:52 crc kubenswrapper[4758]: I0122 17:42:52.996268 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-catalog-content\") pod \"9c3fda25-dd5a-444e-b32e-c57f69da819d\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " Jan 22 17:42:52 crc kubenswrapper[4758]: I0122 17:42:52.996478 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzwqw\" (UniqueName: \"kubernetes.io/projected/9c3fda25-dd5a-444e-b32e-c57f69da819d-kube-api-access-dzwqw\") pod \"9c3fda25-dd5a-444e-b32e-c57f69da819d\" (UID: \"9c3fda25-dd5a-444e-b32e-c57f69da819d\") " Jan 22 17:42:52 crc kubenswrapper[4758]: I0122 17:42:52.999078 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-utilities" (OuterVolumeSpecName: "utilities") pod "9c3fda25-dd5a-444e-b32e-c57f69da819d" (UID: "9c3fda25-dd5a-444e-b32e-c57f69da819d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.008015 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3fda25-dd5a-444e-b32e-c57f69da819d-kube-api-access-dzwqw" (OuterVolumeSpecName: "kube-api-access-dzwqw") pod "9c3fda25-dd5a-444e-b32e-c57f69da819d" (UID: "9c3fda25-dd5a-444e-b32e-c57f69da819d"). InnerVolumeSpecName "kube-api-access-dzwqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.018495 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c3fda25-dd5a-444e-b32e-c57f69da819d" (UID: "9c3fda25-dd5a-444e-b32e-c57f69da819d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.098918 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.098971 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3fda25-dd5a-444e-b32e-c57f69da819d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.098982 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzwqw\" (UniqueName: \"kubernetes.io/projected/9c3fda25-dd5a-444e-b32e-c57f69da819d-kube-api-access-dzwqw\") on node \"crc\" DevicePath \"\"" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.230144 4758 generic.go:334] "Generic (PLEG): container finished" podID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerID="c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867" exitCode=0 Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.230200 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sr7dx" event={"ID":"9c3fda25-dd5a-444e-b32e-c57f69da819d","Type":"ContainerDied","Data":"c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867"} Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.230220 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sr7dx" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.230249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sr7dx" event={"ID":"9c3fda25-dd5a-444e-b32e-c57f69da819d","Type":"ContainerDied","Data":"d40e95f31f12ba62cbf61f5ded8f249f9745fca11095f5acfc4c2f50dade7c11"} Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.230274 4758 scope.go:117] "RemoveContainer" containerID="c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.266242 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sr7dx"] Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.269305 4758 scope.go:117] "RemoveContainer" containerID="4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.275774 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sr7dx"] Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.310173 4758 scope.go:117] "RemoveContainer" containerID="87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.345116 4758 scope.go:117] "RemoveContainer" containerID="c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867" Jan 22 17:42:53 crc kubenswrapper[4758]: E0122 17:42:53.345607 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867\": container with ID starting with c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867 not found: ID does not exist" containerID="c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.345661 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867"} err="failed to get container status \"c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867\": rpc error: code = NotFound desc = could not find container \"c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867\": container with ID starting with c560f3ecd160d81ab42b00bd59eb9bf235c4f50e892a36d645c4139d73122867 not found: ID does not exist" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.345686 4758 scope.go:117] "RemoveContainer" containerID="4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454" Jan 22 17:42:53 crc kubenswrapper[4758]: E0122 17:42:53.346347 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454\": container with ID starting with 4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454 not found: ID does not exist" containerID="4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.346403 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454"} err="failed to get container status \"4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454\": rpc error: code = NotFound desc = could not find 
container \"4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454\": container with ID starting with 4643fb3d54cba33f9f66ac5fb7b317ba52027185f3ceb737d8e85255e0b36454 not found: ID does not exist" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.346440 4758 scope.go:117] "RemoveContainer" containerID="87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017" Jan 22 17:42:53 crc kubenswrapper[4758]: E0122 17:42:53.346768 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017\": container with ID starting with 87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017 not found: ID does not exist" containerID="87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017" Jan 22 17:42:53 crc kubenswrapper[4758]: I0122 17:42:53.346794 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017"} err="failed to get container status \"87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017\": rpc error: code = NotFound desc = could not find container \"87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017\": container with ID starting with 87c5954a58fe43551d238a75b5f12d5a7ffba65233ba2a4dd874a60a999a5017 not found: ID does not exist" Jan 22 17:42:54 crc kubenswrapper[4758]: I0122 17:42:54.820413 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c3fda25-dd5a-444e-b32e-c57f69da819d" path="/var/lib/kubelet/pods/9c3fda25-dd5a-444e-b32e-c57f69da819d/volumes" Jan 22 17:44:43 crc kubenswrapper[4758]: I0122 17:44:43.837835 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:44:43 crc kubenswrapper[4758]: I0122 17:44:43.838626 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.195141 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94"] Jan 22 17:45:00 crc kubenswrapper[4758]: E0122 17:45:00.196037 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerName="registry-server" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.196055 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerName="registry-server" Jan 22 17:45:00 crc kubenswrapper[4758]: E0122 17:45:00.196085 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerName="extract-utilities" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.196091 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerName="extract-utilities" Jan 22 17:45:00 crc kubenswrapper[4758]: E0122 17:45:00.196111 4758 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerName="extract-content" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.196117 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerName="extract-content" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.196380 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3fda25-dd5a-444e-b32e-c57f69da819d" containerName="registry-server" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.197082 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.202155 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.202438 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.219467 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94"] Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.261464 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/793d9467-9d54-4846-b94d-a37e214504ee-config-volume\") pod \"collect-profiles-29485065-mlm94\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.261507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/793d9467-9d54-4846-b94d-a37e214504ee-secret-volume\") pod \"collect-profiles-29485065-mlm94\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.261912 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwc42\" (UniqueName: \"kubernetes.io/projected/793d9467-9d54-4846-b94d-a37e214504ee-kube-api-access-hwc42\") pod \"collect-profiles-29485065-mlm94\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.364027 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwc42\" (UniqueName: \"kubernetes.io/projected/793d9467-9d54-4846-b94d-a37e214504ee-kube-api-access-hwc42\") pod \"collect-profiles-29485065-mlm94\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.364186 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/793d9467-9d54-4846-b94d-a37e214504ee-config-volume\") pod \"collect-profiles-29485065-mlm94\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.364223 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/793d9467-9d54-4846-b94d-a37e214504ee-secret-volume\") pod \"collect-profiles-29485065-mlm94\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.365288 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/793d9467-9d54-4846-b94d-a37e214504ee-config-volume\") pod \"collect-profiles-29485065-mlm94\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.899304 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwc42\" (UniqueName: \"kubernetes.io/projected/793d9467-9d54-4846-b94d-a37e214504ee-kube-api-access-hwc42\") pod \"collect-profiles-29485065-mlm94\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:00 crc kubenswrapper[4758]: I0122 17:45:00.899785 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/793d9467-9d54-4846-b94d-a37e214504ee-secret-volume\") pod \"collect-profiles-29485065-mlm94\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:01 crc kubenswrapper[4758]: I0122 17:45:01.120010 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:01 crc kubenswrapper[4758]: I0122 17:45:01.630802 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94"] Jan 22 17:45:01 crc kubenswrapper[4758]: I0122 17:45:01.770786 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" event={"ID":"793d9467-9d54-4846-b94d-a37e214504ee","Type":"ContainerStarted","Data":"9f53624760ed374507666d98751ddc18c8806a036fb28d47789ba0b84b91c01e"} Jan 22 17:45:02 crc kubenswrapper[4758]: I0122 17:45:02.786366 4758 generic.go:334] "Generic (PLEG): container finished" podID="793d9467-9d54-4846-b94d-a37e214504ee" containerID="71feffce9b38bbaffaf76f2ee35515dd25b3fa00f30e54aefa7c9195f2c008b2" exitCode=0 Jan 22 17:45:02 crc kubenswrapper[4758]: I0122 17:45:02.786462 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" event={"ID":"793d9467-9d54-4846-b94d-a37e214504ee","Type":"ContainerDied","Data":"71feffce9b38bbaffaf76f2ee35515dd25b3fa00f30e54aefa7c9195f2c008b2"} Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.297790 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.359444 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/793d9467-9d54-4846-b94d-a37e214504ee-config-volume\") pod \"793d9467-9d54-4846-b94d-a37e214504ee\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.359820 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwc42\" (UniqueName: \"kubernetes.io/projected/793d9467-9d54-4846-b94d-a37e214504ee-kube-api-access-hwc42\") pod \"793d9467-9d54-4846-b94d-a37e214504ee\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.359880 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/793d9467-9d54-4846-b94d-a37e214504ee-secret-volume\") pod \"793d9467-9d54-4846-b94d-a37e214504ee\" (UID: \"793d9467-9d54-4846-b94d-a37e214504ee\") " Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.361967 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/793d9467-9d54-4846-b94d-a37e214504ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "793d9467-9d54-4846-b94d-a37e214504ee" (UID: "793d9467-9d54-4846-b94d-a37e214504ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.379063 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793d9467-9d54-4846-b94d-a37e214504ee-kube-api-access-hwc42" (OuterVolumeSpecName: "kube-api-access-hwc42") pod "793d9467-9d54-4846-b94d-a37e214504ee" (UID: "793d9467-9d54-4846-b94d-a37e214504ee"). InnerVolumeSpecName "kube-api-access-hwc42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.384078 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793d9467-9d54-4846-b94d-a37e214504ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "793d9467-9d54-4846-b94d-a37e214504ee" (UID: "793d9467-9d54-4846-b94d-a37e214504ee"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.461772 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwc42\" (UniqueName: \"kubernetes.io/projected/793d9467-9d54-4846-b94d-a37e214504ee-kube-api-access-hwc42\") on node \"crc\" DevicePath \"\"" Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.461802 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/793d9467-9d54-4846-b94d-a37e214504ee-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.461811 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/793d9467-9d54-4846-b94d-a37e214504ee-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.805676 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" event={"ID":"793d9467-9d54-4846-b94d-a37e214504ee","Type":"ContainerDied","Data":"9f53624760ed374507666d98751ddc18c8806a036fb28d47789ba0b84b91c01e"} Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.805768 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f53624760ed374507666d98751ddc18c8806a036fb28d47789ba0b84b91c01e" Jan 22 17:45:04 crc kubenswrapper[4758]: I0122 17:45:04.805778 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94" Jan 22 17:45:05 crc kubenswrapper[4758]: I0122 17:45:05.381678 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk"] Jan 22 17:45:05 crc kubenswrapper[4758]: I0122 17:45:05.391160 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485020-7b4nk"] Jan 22 17:45:06 crc kubenswrapper[4758]: I0122 17:45:06.829326 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db2c0313-5662-42cc-bb1d-6a3d53379b40" path="/var/lib/kubelet/pods/db2c0313-5662-42cc-bb1d-6a3d53379b40/volumes" Jan 22 17:45:13 crc kubenswrapper[4758]: I0122 17:45:13.837304 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:45:13 crc kubenswrapper[4758]: I0122 17:45:13.839236 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:45:43 crc kubenswrapper[4758]: I0122 17:45:43.837250 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:45:43 crc kubenswrapper[4758]: I0122 17:45:43.837983 4758 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:45:43 crc kubenswrapper[4758]: I0122 17:45:43.838042 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:45:43 crc kubenswrapper[4758]: I0122 17:45:43.840057 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:45:43 crc kubenswrapper[4758]: I0122 17:45:43.840175 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" gracePeriod=600 Jan 22 17:45:43 crc kubenswrapper[4758]: E0122 17:45:43.985691 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:45:44 crc kubenswrapper[4758]: I0122 17:45:44.237492 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" exitCode=0 Jan 22 17:45:44 crc kubenswrapper[4758]: I0122 17:45:44.237543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93"} Jan 22 17:45:44 crc kubenswrapper[4758]: I0122 17:45:44.237614 4758 scope.go:117] "RemoveContainer" containerID="fb2b5dae11488cf1f921401dbbd3aaca34dac8cdcf379ae8fd0abef128b1dfc5" Jan 22 17:45:44 crc kubenswrapper[4758]: I0122 17:45:44.239508 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:45:44 crc kubenswrapper[4758]: E0122 17:45:44.240200 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:45:56 crc kubenswrapper[4758]: I0122 17:45:56.808282 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:45:56 crc kubenswrapper[4758]: E0122 17:45:56.809177 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:46:03 crc kubenswrapper[4758]: I0122 17:46:03.136456 4758 scope.go:117] "RemoveContainer" containerID="039d08746163c267120e43d8643da740fdb10bbd0d750cdc23358467be8cc8f8" Jan 22 17:46:08 crc kubenswrapper[4758]: I0122 17:46:08.821161 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:46:08 crc kubenswrapper[4758]: E0122 17:46:08.822118 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:46:20 crc kubenswrapper[4758]: I0122 17:46:20.808826 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:46:20 crc kubenswrapper[4758]: E0122 17:46:20.809820 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.197195 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lbvcz"] Jan 22 17:46:33 crc kubenswrapper[4758]: E0122 17:46:33.198809 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793d9467-9d54-4846-b94d-a37e214504ee" containerName="collect-profiles" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.198827 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="793d9467-9d54-4846-b94d-a37e214504ee" containerName="collect-profiles" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.199039 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="793d9467-9d54-4846-b94d-a37e214504ee" containerName="collect-profiles" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.200746 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.220609 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcz"] Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.275444 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-utilities\") pod \"redhat-operators-lbvcz\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.275647 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vm9k\" (UniqueName: \"kubernetes.io/projected/85d14087-d4f7-4dfe-8212-46d20f5e130f-kube-api-access-8vm9k\") pod \"redhat-operators-lbvcz\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.275724 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-catalog-content\") pod \"redhat-operators-lbvcz\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.377517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vm9k\" (UniqueName: \"kubernetes.io/projected/85d14087-d4f7-4dfe-8212-46d20f5e130f-kube-api-access-8vm9k\") pod \"redhat-operators-lbvcz\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.377603 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-catalog-content\") pod \"redhat-operators-lbvcz\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.377704 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-utilities\") pod \"redhat-operators-lbvcz\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.378332 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-utilities\") pod \"redhat-operators-lbvcz\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.378487 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-catalog-content\") pod \"redhat-operators-lbvcz\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.397289 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8vm9k\" (UniqueName: \"kubernetes.io/projected/85d14087-d4f7-4dfe-8212-46d20f5e130f-kube-api-access-8vm9k\") pod \"redhat-operators-lbvcz\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:33 crc kubenswrapper[4758]: I0122 17:46:33.518336 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:34 crc kubenswrapper[4758]: I0122 17:46:34.134260 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcz"] Jan 22 17:46:34 crc kubenswrapper[4758]: I0122 17:46:34.795354 4758 generic.go:334] "Generic (PLEG): container finished" podID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerID="b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3" exitCode=0 Jan 22 17:46:34 crc kubenswrapper[4758]: I0122 17:46:34.795980 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcz" event={"ID":"85d14087-d4f7-4dfe-8212-46d20f5e130f","Type":"ContainerDied","Data":"b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3"} Jan 22 17:46:34 crc kubenswrapper[4758]: I0122 17:46:34.796014 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcz" event={"ID":"85d14087-d4f7-4dfe-8212-46d20f5e130f","Type":"ContainerStarted","Data":"01b6ac0f8ca18c6fafa1d9d5f6924cc3d86263b1d43c1bda157c0b41cbade2f4"} Jan 22 17:46:34 crc kubenswrapper[4758]: I0122 17:46:34.808782 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:46:34 crc kubenswrapper[4758]: E0122 17:46:34.809037 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:46:36 crc kubenswrapper[4758]: I0122 17:46:36.824311 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcz" event={"ID":"85d14087-d4f7-4dfe-8212-46d20f5e130f","Type":"ContainerStarted","Data":"1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7"} Jan 22 17:46:39 crc kubenswrapper[4758]: I0122 17:46:39.880326 4758 generic.go:334] "Generic (PLEG): container finished" podID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerID="1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7" exitCode=0 Jan 22 17:46:39 crc kubenswrapper[4758]: I0122 17:46:39.880491 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcz" event={"ID":"85d14087-d4f7-4dfe-8212-46d20f5e130f","Type":"ContainerDied","Data":"1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7"} Jan 22 17:46:40 crc kubenswrapper[4758]: I0122 17:46:40.893233 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcz" event={"ID":"85d14087-d4f7-4dfe-8212-46d20f5e130f","Type":"ContainerStarted","Data":"08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8"} Jan 22 17:46:40 crc kubenswrapper[4758]: I0122 17:46:40.923396 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-lbvcz" podStartSLOduration=2.387098903 podStartE2EDuration="7.923365563s" podCreationTimestamp="2026-01-22 17:46:33 +0000 UTC" firstStartedPulling="2026-01-22 17:46:34.79766553 +0000 UTC m=+4616.281004815" lastFinishedPulling="2026-01-22 17:46:40.3339322 +0000 UTC m=+4621.817271475" observedRunningTime="2026-01-22 17:46:40.914174512 +0000 UTC m=+4622.397513837" watchObservedRunningTime="2026-01-22 17:46:40.923365563 +0000 UTC m=+4622.406704848" Jan 22 17:46:43 crc kubenswrapper[4758]: I0122 17:46:43.570376 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:43 crc kubenswrapper[4758]: I0122 17:46:43.572198 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:44 crc kubenswrapper[4758]: I0122 17:46:44.620045 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lbvcz" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerName="registry-server" probeResult="failure" output=< Jan 22 17:46:44 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 17:46:44 crc kubenswrapper[4758]: > Jan 22 17:46:45 crc kubenswrapper[4758]: I0122 17:46:45.809936 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:46:45 crc kubenswrapper[4758]: E0122 17:46:45.810404 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:46:53 crc kubenswrapper[4758]: I0122 17:46:53.568313 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:53 crc kubenswrapper[4758]: I0122 17:46:53.620661 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:53 crc kubenswrapper[4758]: I0122 17:46:53.806349 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcz"] Jan 22 17:46:55 crc kubenswrapper[4758]: I0122 17:46:55.113451 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lbvcz" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerName="registry-server" containerID="cri-o://08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8" gracePeriod=2 Jan 22 17:46:55 crc kubenswrapper[4758]: I0122 17:46:55.851198 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:55 crc kubenswrapper[4758]: I0122 17:46:55.942467 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-catalog-content\") pod \"85d14087-d4f7-4dfe-8212-46d20f5e130f\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " Jan 22 17:46:55 crc kubenswrapper[4758]: I0122 17:46:55.942562 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-utilities\") pod \"85d14087-d4f7-4dfe-8212-46d20f5e130f\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " Jan 22 17:46:55 crc kubenswrapper[4758]: I0122 17:46:55.942672 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vm9k\" (UniqueName: \"kubernetes.io/projected/85d14087-d4f7-4dfe-8212-46d20f5e130f-kube-api-access-8vm9k\") pod \"85d14087-d4f7-4dfe-8212-46d20f5e130f\" (UID: \"85d14087-d4f7-4dfe-8212-46d20f5e130f\") " Jan 22 17:46:55 crc kubenswrapper[4758]: I0122 17:46:55.943765 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-utilities" (OuterVolumeSpecName: "utilities") pod "85d14087-d4f7-4dfe-8212-46d20f5e130f" (UID: "85d14087-d4f7-4dfe-8212-46d20f5e130f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:46:55 crc kubenswrapper[4758]: I0122 17:46:55.952044 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d14087-d4f7-4dfe-8212-46d20f5e130f-kube-api-access-8vm9k" (OuterVolumeSpecName: "kube-api-access-8vm9k") pod "85d14087-d4f7-4dfe-8212-46d20f5e130f" (UID: "85d14087-d4f7-4dfe-8212-46d20f5e130f"). InnerVolumeSpecName "kube-api-access-8vm9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.046534 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.046576 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vm9k\" (UniqueName: \"kubernetes.io/projected/85d14087-d4f7-4dfe-8212-46d20f5e130f-kube-api-access-8vm9k\") on node \"crc\" DevicePath \"\"" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.072902 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85d14087-d4f7-4dfe-8212-46d20f5e130f" (UID: "85d14087-d4f7-4dfe-8212-46d20f5e130f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.126707 4758 generic.go:334] "Generic (PLEG): container finished" podID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerID="08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8" exitCode=0 Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.126808 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcz" event={"ID":"85d14087-d4f7-4dfe-8212-46d20f5e130f","Type":"ContainerDied","Data":"08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8"} Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.127108 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcz" event={"ID":"85d14087-d4f7-4dfe-8212-46d20f5e130f","Type":"ContainerDied","Data":"01b6ac0f8ca18c6fafa1d9d5f6924cc3d86263b1d43c1bda157c0b41cbade2f4"} Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.127162 4758 scope.go:117] "RemoveContainer" containerID="08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.127225 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcz" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.149198 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d14087-d4f7-4dfe-8212-46d20f5e130f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.158578 4758 scope.go:117] "RemoveContainer" containerID="1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.165834 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcz"] Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.175785 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcz"] Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.202669 4758 scope.go:117] "RemoveContainer" containerID="b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.230173 4758 scope.go:117] "RemoveContainer" containerID="08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8" Jan 22 17:46:56 crc kubenswrapper[4758]: E0122 17:46:56.230737 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8\": container with ID starting with 08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8 not found: ID does not exist" containerID="08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.230806 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8"} err="failed to get container status \"08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8\": rpc error: code = NotFound desc = could not find container \"08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8\": container with ID starting with 08f1aa28d671f27a265c1ca6886d0f4e1386ad99758c93b6a8aa8150033edbd8 not found: ID does not exist" Jan 22 17:46:56 crc 
kubenswrapper[4758]: I0122 17:46:56.230834 4758 scope.go:117] "RemoveContainer" containerID="1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7" Jan 22 17:46:56 crc kubenswrapper[4758]: E0122 17:46:56.231294 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7\": container with ID starting with 1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7 not found: ID does not exist" containerID="1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.231334 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7"} err="failed to get container status \"1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7\": rpc error: code = NotFound desc = could not find container \"1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7\": container with ID starting with 1bfdda96277118283a7ecfdbc9ff9c0005191b2f5675127b8ed5a87357726bb7 not found: ID does not exist" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.231361 4758 scope.go:117] "RemoveContainer" containerID="b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3" Jan 22 17:46:56 crc kubenswrapper[4758]: E0122 17:46:56.231690 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3\": container with ID starting with b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3 not found: ID does not exist" containerID="b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.231713 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3"} err="failed to get container status \"b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3\": rpc error: code = NotFound desc = could not find container \"b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3\": container with ID starting with b0dfcc54e71646ce8111fd30420e5f6fb1ebbc956dc688584b6fddb6db0c4fa3 not found: ID does not exist" Jan 22 17:46:56 crc kubenswrapper[4758]: I0122 17:46:56.849702 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" path="/var/lib/kubelet/pods/85d14087-d4f7-4dfe-8212-46d20f5e130f/volumes" Jan 22 17:47:00 crc kubenswrapper[4758]: I0122 17:47:00.810016 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:47:00 crc kubenswrapper[4758]: E0122 17:47:00.810988 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:47:13 crc kubenswrapper[4758]: I0122 17:47:13.808721 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" 
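The recurring "back-off 5m0s restarting failed container=machine-config-daemon" errors above and below are the kubelet declining to restart a crash-looping container while its restart back-off window is still open: every sync pass re-evaluates the pod, finds the window open, and skips the restart with the same CrashLoopBackOff message without starting anything. A minimal sketch of a capped, doubling back-off of that shape follows; the 10s initial delay, 2x growth, and 5m cap are the commonly documented kubelet defaults and are assumed here rather than read from this node's configuration — this is an illustration of the pattern, not the kubelet's actual implementation.

```go
// Illustrative sketch only: reproduces the shape of the capped, doubling
// restart back-off behind the repeated "back-off 5m0s" errors in this log.
// The constants are assumed defaults, not values taken from this cluster.
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous restart delay and clamps it at maxDelay.
// A zero previous delay means the container has not failed before.
func nextDelay(prev, initial, maxDelay time.Duration) time.Duration {
	if prev == 0 {
		return initial
	}
	next := 2 * prev
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	const (
		initial  = 10 * time.Second // assumed initial back-off
		maxDelay = 5 * time.Minute  // matches the "back-off 5m0s" in the log
	)
	var delay time.Duration
	for failure := 1; failure <= 7; failure++ {
		delay = nextDelay(delay, initial, maxDelay)
		fmt.Printf("after failure %d: wait %s before the next restart\n", failure, delay)
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s: once the cap is reached,
	// every sync pass inside the open window is skipped with the same
	// CrashLoopBackOff error seen repeatedly in this log.
}
```

Under those assumptions the delay sequence is 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s for every subsequent failure, which is consistent with the error repeating on each sync pass until 17:50:49, when the RemoveContainer for bca2620d… finally proceeds without a back-off error and the replacement machine-config-daemon container b914cf88… is reported started at 17:50:51.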
Jan 22 17:47:13 crc kubenswrapper[4758]: E0122 17:47:13.810361 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:47:25 crc kubenswrapper[4758]: I0122 17:47:25.699394 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f52e2571-4001-441f-b7b7-b4746ae1c10d" containerName="galera" probeResult="failure" output="command timed out" Jan 22 17:47:29 crc kubenswrapper[4758]: I0122 17:47:29.807889 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:47:29 crc kubenswrapper[4758]: E0122 17:47:29.809973 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.743552 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-66r7j"] Jan 22 17:47:44 crc kubenswrapper[4758]: E0122 17:47:44.744876 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerName="registry-server" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.744903 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerName="registry-server" Jan 22 17:47:44 crc kubenswrapper[4758]: E0122 17:47:44.744948 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerName="extract-utilities" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.744957 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerName="extract-utilities" Jan 22 17:47:44 crc kubenswrapper[4758]: E0122 17:47:44.744977 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerName="extract-content" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.744985 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerName="extract-content" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.745276 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d14087-d4f7-4dfe-8212-46d20f5e130f" containerName="registry-server" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.747241 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.756073 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5790457f-38e4-4d41-8ea3-f6d950f5d376-utilities\") pod \"certified-operators-66r7j\" (UID: \"5790457f-38e4-4d41-8ea3-f6d950f5d376\") " pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.756171 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5790457f-38e4-4d41-8ea3-f6d950f5d376-catalog-content\") pod \"certified-operators-66r7j\" (UID: \"5790457f-38e4-4d41-8ea3-f6d950f5d376\") " pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.756251 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w9pq\" (UniqueName: \"kubernetes.io/projected/5790457f-38e4-4d41-8ea3-f6d950f5d376-kube-api-access-7w9pq\") pod \"certified-operators-66r7j\" (UID: \"5790457f-38e4-4d41-8ea3-f6d950f5d376\") " pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.761990 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66r7j"] Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.809533 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:47:44 crc kubenswrapper[4758]: E0122 17:47:44.809895 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.858705 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5790457f-38e4-4d41-8ea3-f6d950f5d376-catalog-content\") pod \"certified-operators-66r7j\" (UID: \"5790457f-38e4-4d41-8ea3-f6d950f5d376\") " pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.859208 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w9pq\" (UniqueName: \"kubernetes.io/projected/5790457f-38e4-4d41-8ea3-f6d950f5d376-kube-api-access-7w9pq\") pod \"certified-operators-66r7j\" (UID: \"5790457f-38e4-4d41-8ea3-f6d950f5d376\") " pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.859303 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5790457f-38e4-4d41-8ea3-f6d950f5d376-catalog-content\") pod \"certified-operators-66r7j\" (UID: \"5790457f-38e4-4d41-8ea3-f6d950f5d376\") " pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.859400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5790457f-38e4-4d41-8ea3-f6d950f5d376-utilities\") pod \"certified-operators-66r7j\" (UID: \"5790457f-38e4-4d41-8ea3-f6d950f5d376\") " pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.859943 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5790457f-38e4-4d41-8ea3-f6d950f5d376-utilities\") pod \"certified-operators-66r7j\" (UID: \"5790457f-38e4-4d41-8ea3-f6d950f5d376\") " pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:44 crc kubenswrapper[4758]: I0122 17:47:44.880971 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w9pq\" (UniqueName: \"kubernetes.io/projected/5790457f-38e4-4d41-8ea3-f6d950f5d376-kube-api-access-7w9pq\") pod \"certified-operators-66r7j\" (UID: \"5790457f-38e4-4d41-8ea3-f6d950f5d376\") " pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:45 crc kubenswrapper[4758]: I0122 17:47:45.071019 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:45 crc kubenswrapper[4758]: I0122 17:47:45.599204 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66r7j"] Jan 22 17:47:45 crc kubenswrapper[4758]: I0122 17:47:45.635326 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66r7j" event={"ID":"5790457f-38e4-4d41-8ea3-f6d950f5d376","Type":"ContainerStarted","Data":"b137dc894ebf718f4ea8ded801a5ba461b62124c5746b3e8a6c4e6b6f8aadc1c"} Jan 22 17:47:46 crc kubenswrapper[4758]: I0122 17:47:46.824631 4758 generic.go:334] "Generic (PLEG): container finished" podID="5790457f-38e4-4d41-8ea3-f6d950f5d376" containerID="bb8d55924036245523dfc853ac6bd0ee131549c1bb40a35901643c7ddcebe5dc" exitCode=0 Jan 22 17:47:46 crc kubenswrapper[4758]: I0122 17:47:46.830222 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:47:46 crc kubenswrapper[4758]: I0122 17:47:46.831101 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66r7j" event={"ID":"5790457f-38e4-4d41-8ea3-f6d950f5d376","Type":"ContainerDied","Data":"bb8d55924036245523dfc853ac6bd0ee131549c1bb40a35901643c7ddcebe5dc"} Jan 22 17:47:50 crc kubenswrapper[4758]: I0122 17:47:50.865985 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66r7j" event={"ID":"5790457f-38e4-4d41-8ea3-f6d950f5d376","Type":"ContainerStarted","Data":"237a58a584b45ae2ae5581baf8b5402f68be1eefac9314aee76568787261e982"} Jan 22 17:47:51 crc kubenswrapper[4758]: I0122 17:47:51.876588 4758 generic.go:334] "Generic (PLEG): container finished" podID="5790457f-38e4-4d41-8ea3-f6d950f5d376" containerID="237a58a584b45ae2ae5581baf8b5402f68be1eefac9314aee76568787261e982" exitCode=0 Jan 22 17:47:51 crc kubenswrapper[4758]: I0122 17:47:51.876640 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66r7j" event={"ID":"5790457f-38e4-4d41-8ea3-f6d950f5d376","Type":"ContainerDied","Data":"237a58a584b45ae2ae5581baf8b5402f68be1eefac9314aee76568787261e982"} Jan 22 17:47:53 crc kubenswrapper[4758]: I0122 17:47:53.897004 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66r7j" 
event={"ID":"5790457f-38e4-4d41-8ea3-f6d950f5d376","Type":"ContainerStarted","Data":"2ba9ea3f4b48dee0946461a30a8562dbbcbeb109d66fa7505f0133eea85572ce"} Jan 22 17:47:53 crc kubenswrapper[4758]: I0122 17:47:53.925265 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-66r7j" podStartSLOduration=4.484730022 podStartE2EDuration="9.92522461s" podCreationTimestamp="2026-01-22 17:47:44 +0000 UTC" firstStartedPulling="2026-01-22 17:47:46.829699621 +0000 UTC m=+4688.313038906" lastFinishedPulling="2026-01-22 17:47:52.270194199 +0000 UTC m=+4693.753533494" observedRunningTime="2026-01-22 17:47:53.916823161 +0000 UTC m=+4695.400162456" watchObservedRunningTime="2026-01-22 17:47:53.92522461 +0000 UTC m=+4695.408563905" Jan 22 17:47:55 crc kubenswrapper[4758]: I0122 17:47:55.071648 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:55 crc kubenswrapper[4758]: I0122 17:47:55.072098 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:55 crc kubenswrapper[4758]: I0122 17:47:55.154483 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:47:57 crc kubenswrapper[4758]: I0122 17:47:57.809647 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:47:57 crc kubenswrapper[4758]: E0122 17:47:57.810706 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:48:05 crc kubenswrapper[4758]: I0122 17:48:05.155134 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-66r7j" Jan 22 17:48:05 crc kubenswrapper[4758]: I0122 17:48:05.243901 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66r7j"] Jan 22 17:48:05 crc kubenswrapper[4758]: I0122 17:48:05.339840 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hdmsj"] Jan 22 17:48:05 crc kubenswrapper[4758]: I0122 17:48:05.340143 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hdmsj" podUID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerName="registry-server" containerID="cri-o://d6059ebdebaacf4505ad36caeb6fab6d221725f4b6be3264c6c54884754320a8" gracePeriod=2 Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.042149 4758 generic.go:334] "Generic (PLEG): container finished" podID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerID="d6059ebdebaacf4505ad36caeb6fab6d221725f4b6be3264c6c54884754320a8" exitCode=0 Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.042338 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdmsj" event={"ID":"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52","Type":"ContainerDied","Data":"d6059ebdebaacf4505ad36caeb6fab6d221725f4b6be3264c6c54884754320a8"} Jan 22 17:48:06 crc kubenswrapper[4758]: 
I0122 17:48:06.409512 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.588836 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxsls\" (UniqueName: \"kubernetes.io/projected/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-kube-api-access-qxsls\") pod \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.588928 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-catalog-content\") pod \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.590954 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-utilities\") pod \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\" (UID: \"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52\") " Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.592661 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-utilities" (OuterVolumeSpecName: "utilities") pod "c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" (UID: "c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.617337 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-kube-api-access-qxsls" (OuterVolumeSpecName: "kube-api-access-qxsls") pod "c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" (UID: "c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52"). InnerVolumeSpecName "kube-api-access-qxsls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.654793 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" (UID: "c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.694966 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxsls\" (UniqueName: \"kubernetes.io/projected/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-kube-api-access-qxsls\") on node \"crc\" DevicePath \"\"" Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.695006 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:48:06 crc kubenswrapper[4758]: I0122 17:48:06.695019 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:48:07 crc kubenswrapper[4758]: I0122 17:48:07.054688 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdmsj" event={"ID":"c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52","Type":"ContainerDied","Data":"f0b4976efd0fa58c1d1e5db0679c922c9f972cbc07e34ab8a3d8395ab79f1b43"} Jan 22 17:48:07 crc kubenswrapper[4758]: I0122 17:48:07.055045 4758 scope.go:117] "RemoveContainer" containerID="d6059ebdebaacf4505ad36caeb6fab6d221725f4b6be3264c6c54884754320a8" Jan 22 17:48:07 crc kubenswrapper[4758]: I0122 17:48:07.054955 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdmsj" Jan 22 17:48:07 crc kubenswrapper[4758]: I0122 17:48:07.088472 4758 scope.go:117] "RemoveContainer" containerID="fe4f9f9cb91bc814c7639949421fe6e054d72de9a1547f0ee667bf582b6bc06e" Jan 22 17:48:07 crc kubenswrapper[4758]: I0122 17:48:07.091944 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hdmsj"] Jan 22 17:48:07 crc kubenswrapper[4758]: I0122 17:48:07.100610 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hdmsj"] Jan 22 17:48:07 crc kubenswrapper[4758]: I0122 17:48:07.112129 4758 scope.go:117] "RemoveContainer" containerID="5f21c55cdce018c32fbdec9817e61675dc3daf521acf33799de97de693565e31" Jan 22 17:48:08 crc kubenswrapper[4758]: I0122 17:48:08.819500 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" path="/var/lib/kubelet/pods/c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52/volumes" Jan 22 17:48:11 crc kubenswrapper[4758]: I0122 17:48:11.807864 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:48:11 crc kubenswrapper[4758]: E0122 17:48:11.808575 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:48:25 crc kubenswrapper[4758]: I0122 17:48:25.809713 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:48:25 crc kubenswrapper[4758]: E0122 17:48:25.811109 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:48:38 crc kubenswrapper[4758]: I0122 17:48:38.808622 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:48:38 crc kubenswrapper[4758]: E0122 17:48:38.809481 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:48:49 crc kubenswrapper[4758]: I0122 17:48:49.808632 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:48:49 crc kubenswrapper[4758]: E0122 17:48:49.809452 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:49:03 crc kubenswrapper[4758]: I0122 17:49:03.808771 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:49:03 crc kubenswrapper[4758]: E0122 17:49:03.809865 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:49:18 crc kubenswrapper[4758]: I0122 17:49:18.823172 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:49:18 crc kubenswrapper[4758]: E0122 17:49:18.823975 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:49:32 crc kubenswrapper[4758]: I0122 17:49:32.098654 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:49:32 crc kubenswrapper[4758]: E0122 17:49:32.099634 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:49:43 crc kubenswrapper[4758]: I0122 17:49:43.808195 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:49:43 crc kubenswrapper[4758]: E0122 17:49:43.809277 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:49:57 crc kubenswrapper[4758]: I0122 17:49:57.808152 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:49:57 crc kubenswrapper[4758]: E0122 17:49:57.808982 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:50:11 crc kubenswrapper[4758]: I0122 17:50:11.808099 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:50:11 crc kubenswrapper[4758]: E0122 17:50:11.809653 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:50:26 crc kubenswrapper[4758]: I0122 17:50:26.807943 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:50:26 crc kubenswrapper[4758]: E0122 17:50:26.808824 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.251906 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7cdh5"] Jan 22 17:50:35 crc kubenswrapper[4758]: E0122 17:50:35.269997 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerName="extract-utilities" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.270042 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerName="extract-utilities" Jan 22 17:50:35 crc kubenswrapper[4758]: E0122 17:50:35.270079 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerName="registry-server" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.270089 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerName="registry-server" Jan 22 17:50:35 crc kubenswrapper[4758]: E0122 17:50:35.270118 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerName="extract-content" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.270127 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerName="extract-content" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.270428 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cbb6c2-7109-4445-a85e-c5e4ecfc6d52" containerName="registry-server" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.272699 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cdh5"] Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.272887 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.298241 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-utilities\") pod \"community-operators-7cdh5\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.298434 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-catalog-content\") pod \"community-operators-7cdh5\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.300529 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gs26\" (UniqueName: \"kubernetes.io/projected/8b57a902-1a01-4866-9cda-0b82e3bb20f4-kube-api-access-9gs26\") pod \"community-operators-7cdh5\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.403072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-utilities\") pod \"community-operators-7cdh5\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.403215 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-catalog-content\") pod \"community-operators-7cdh5\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.403331 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gs26\" (UniqueName: 
\"kubernetes.io/projected/8b57a902-1a01-4866-9cda-0b82e3bb20f4-kube-api-access-9gs26\") pod \"community-operators-7cdh5\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.403787 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-utilities\") pod \"community-operators-7cdh5\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.403887 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-catalog-content\") pod \"community-operators-7cdh5\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.432591 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gs26\" (UniqueName: \"kubernetes.io/projected/8b57a902-1a01-4866-9cda-0b82e3bb20f4-kube-api-access-9gs26\") pod \"community-operators-7cdh5\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:35 crc kubenswrapper[4758]: I0122 17:50:35.607301 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:36 crc kubenswrapper[4758]: I0122 17:50:36.235794 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cdh5"] Jan 22 17:50:36 crc kubenswrapper[4758]: W0122 17:50:36.245721 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b57a902_1a01_4866_9cda_0b82e3bb20f4.slice/crio-bc3843506a8b6a523d9f95f32e2145d87c9035a4ff4393964debc7b56b1bdb54 WatchSource:0}: Error finding container bc3843506a8b6a523d9f95f32e2145d87c9035a4ff4393964debc7b56b1bdb54: Status 404 returned error can't find the container with id bc3843506a8b6a523d9f95f32e2145d87c9035a4ff4393964debc7b56b1bdb54 Jan 22 17:50:36 crc kubenswrapper[4758]: I0122 17:50:36.951597 4758 generic.go:334] "Generic (PLEG): container finished" podID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerID="d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336" exitCode=0 Jan 22 17:50:36 crc kubenswrapper[4758]: I0122 17:50:36.951733 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cdh5" event={"ID":"8b57a902-1a01-4866-9cda-0b82e3bb20f4","Type":"ContainerDied","Data":"d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336"} Jan 22 17:50:36 crc kubenswrapper[4758]: I0122 17:50:36.951873 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cdh5" event={"ID":"8b57a902-1a01-4866-9cda-0b82e3bb20f4","Type":"ContainerStarted","Data":"bc3843506a8b6a523d9f95f32e2145d87c9035a4ff4393964debc7b56b1bdb54"} Jan 22 17:50:38 crc kubenswrapper[4758]: I0122 17:50:38.815833 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:50:38 crc kubenswrapper[4758]: E0122 17:50:38.816648 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:50:38 crc kubenswrapper[4758]: I0122 17:50:38.975445 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cdh5" event={"ID":"8b57a902-1a01-4866-9cda-0b82e3bb20f4","Type":"ContainerStarted","Data":"35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a"} Jan 22 17:50:39 crc kubenswrapper[4758]: I0122 17:50:39.986571 4758 generic.go:334] "Generic (PLEG): container finished" podID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerID="35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a" exitCode=0 Jan 22 17:50:39 crc kubenswrapper[4758]: I0122 17:50:39.987045 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cdh5" event={"ID":"8b57a902-1a01-4866-9cda-0b82e3bb20f4","Type":"ContainerDied","Data":"35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a"} Jan 22 17:50:40 crc kubenswrapper[4758]: I0122 17:50:40.999396 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cdh5" event={"ID":"8b57a902-1a01-4866-9cda-0b82e3bb20f4","Type":"ContainerStarted","Data":"4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49"} Jan 22 17:50:41 crc kubenswrapper[4758]: I0122 17:50:41.026709 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7cdh5" podStartSLOduration=2.454874693 podStartE2EDuration="6.026662691s" podCreationTimestamp="2026-01-22 17:50:35 +0000 UTC" firstStartedPulling="2026-01-22 17:50:36.9543372 +0000 UTC m=+4858.437676495" lastFinishedPulling="2026-01-22 17:50:40.526125178 +0000 UTC m=+4862.009464493" observedRunningTime="2026-01-22 17:50:41.025403898 +0000 UTC m=+4862.508743193" watchObservedRunningTime="2026-01-22 17:50:41.026662691 +0000 UTC m=+4862.510001976" Jan 22 17:50:45 crc kubenswrapper[4758]: I0122 17:50:45.608563 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:45 crc kubenswrapper[4758]: I0122 17:50:45.609406 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:45 crc kubenswrapper[4758]: I0122 17:50:45.701334 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:46 crc kubenswrapper[4758]: I0122 17:50:46.121820 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:46 crc kubenswrapper[4758]: I0122 17:50:46.176387 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cdh5"] Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.089636 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7cdh5" podUID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerName="registry-server" containerID="cri-o://4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49" gracePeriod=2 Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 
17:50:48.572386 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.632501 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-utilities\") pod \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.632627 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-catalog-content\") pod \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.632790 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gs26\" (UniqueName: \"kubernetes.io/projected/8b57a902-1a01-4866-9cda-0b82e3bb20f4-kube-api-access-9gs26\") pod \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\" (UID: \"8b57a902-1a01-4866-9cda-0b82e3bb20f4\") " Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.634567 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-utilities" (OuterVolumeSpecName: "utilities") pod "8b57a902-1a01-4866-9cda-0b82e3bb20f4" (UID: "8b57a902-1a01-4866-9cda-0b82e3bb20f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.643618 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b57a902-1a01-4866-9cda-0b82e3bb20f4-kube-api-access-9gs26" (OuterVolumeSpecName: "kube-api-access-9gs26") pod "8b57a902-1a01-4866-9cda-0b82e3bb20f4" (UID: "8b57a902-1a01-4866-9cda-0b82e3bb20f4"). InnerVolumeSpecName "kube-api-access-9gs26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.713289 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b57a902-1a01-4866-9cda-0b82e3bb20f4" (UID: "8b57a902-1a01-4866-9cda-0b82e3bb20f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.734600 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.734639 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gs26\" (UniqueName: \"kubernetes.io/projected/8b57a902-1a01-4866-9cda-0b82e3bb20f4-kube-api-access-9gs26\") on node \"crc\" DevicePath \"\"" Jan 22 17:50:48 crc kubenswrapper[4758]: I0122 17:50:48.734652 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b57a902-1a01-4866-9cda-0b82e3bb20f4-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.105926 4758 generic.go:334] "Generic (PLEG): container finished" podID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerID="4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49" exitCode=0 Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.106039 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cdh5" event={"ID":"8b57a902-1a01-4866-9cda-0b82e3bb20f4","Type":"ContainerDied","Data":"4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49"} Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.106534 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cdh5" event={"ID":"8b57a902-1a01-4866-9cda-0b82e3bb20f4","Type":"ContainerDied","Data":"bc3843506a8b6a523d9f95f32e2145d87c9035a4ff4393964debc7b56b1bdb54"} Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.106576 4758 scope.go:117] "RemoveContainer" containerID="4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.106113 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cdh5" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.147788 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cdh5"] Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.149725 4758 scope.go:117] "RemoveContainer" containerID="35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.166320 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7cdh5"] Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.184060 4758 scope.go:117] "RemoveContainer" containerID="d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.246044 4758 scope.go:117] "RemoveContainer" containerID="4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49" Jan 22 17:50:49 crc kubenswrapper[4758]: E0122 17:50:49.247117 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49\": container with ID starting with 4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49 not found: ID does not exist" containerID="4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.247196 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49"} err="failed to get container status \"4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49\": rpc error: code = NotFound desc = could not find container \"4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49\": container with ID starting with 4565b51ca8c61de4eef1ddbeaa6e08ecf065308ee1b94ce6a6eb242a165e5f49 not found: ID does not exist" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.247236 4758 scope.go:117] "RemoveContainer" containerID="35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a" Jan 22 17:50:49 crc kubenswrapper[4758]: E0122 17:50:49.247578 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a\": container with ID starting with 35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a not found: ID does not exist" containerID="35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.247604 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a"} err="failed to get container status \"35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a\": rpc error: code = NotFound desc = could not find container \"35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a\": container with ID starting with 35693580050eca9e02ef0f3bce835d0256c171b5472d89f920deabc7e9a6749a not found: ID does not exist" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.247619 4758 scope.go:117] "RemoveContainer" containerID="d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336" Jan 22 17:50:49 crc kubenswrapper[4758]: E0122 17:50:49.248185 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336\": container with ID starting with d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336 not found: ID does not exist" containerID="d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.248212 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336"} err="failed to get container status \"d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336\": rpc error: code = NotFound desc = could not find container \"d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336\": container with ID starting with d73b717dfcfe95e330e647f049ff3b93c924b5bc4951c2daeecbc75946c05336 not found: ID does not exist" Jan 22 17:50:49 crc kubenswrapper[4758]: I0122 17:50:49.808582 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:50:50 crc kubenswrapper[4758]: I0122 17:50:50.820200 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" path="/var/lib/kubelet/pods/8b57a902-1a01-4866-9cda-0b82e3bb20f4/volumes" Jan 22 17:50:51 crc kubenswrapper[4758]: I0122 17:50:51.127671 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"b914cf88407ffa07bd6d3f02508e35be12faa9fed3ba54f385c87d9e72a3155f"} Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.394169 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kjnx9"] Jan 22 17:52:57 crc kubenswrapper[4758]: E0122 17:52:57.395797 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerName="extract-utilities" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.395829 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerName="extract-utilities" Jan 22 17:52:57 crc kubenswrapper[4758]: E0122 17:52:57.395893 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerName="extract-content" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.395912 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerName="extract-content" Jan 22 17:52:57 crc kubenswrapper[4758]: E0122 17:52:57.395973 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerName="registry-server" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.395991 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerName="registry-server" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.396419 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b57a902-1a01-4866-9cda-0b82e3bb20f4" containerName="registry-server" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.399949 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.415525 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjnx9"] Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.549191 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-utilities\") pod \"redhat-marketplace-kjnx9\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.549543 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-catalog-content\") pod \"redhat-marketplace-kjnx9\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.549887 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnhg\" (UniqueName: \"kubernetes.io/projected/40fb61d8-d58a-440b-b1c2-45987a86f856-kube-api-access-7hnhg\") pod \"redhat-marketplace-kjnx9\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.652307 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnhg\" (UniqueName: \"kubernetes.io/projected/40fb61d8-d58a-440b-b1c2-45987a86f856-kube-api-access-7hnhg\") pod \"redhat-marketplace-kjnx9\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.652475 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-utilities\") pod \"redhat-marketplace-kjnx9\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.652533 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-catalog-content\") pod \"redhat-marketplace-kjnx9\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.653081 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-utilities\") pod \"redhat-marketplace-kjnx9\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.653102 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-catalog-content\") pod \"redhat-marketplace-kjnx9\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.674819 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7hnhg\" (UniqueName: \"kubernetes.io/projected/40fb61d8-d58a-440b-b1c2-45987a86f856-kube-api-access-7hnhg\") pod \"redhat-marketplace-kjnx9\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:57 crc kubenswrapper[4758]: I0122 17:52:57.746631 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:52:58 crc kubenswrapper[4758]: I0122 17:52:58.241297 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjnx9"] Jan 22 17:52:58 crc kubenswrapper[4758]: I0122 17:52:58.508651 4758 generic.go:334] "Generic (PLEG): container finished" podID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerID="42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a" exitCode=0 Jan 22 17:52:58 crc kubenswrapper[4758]: I0122 17:52:58.508734 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjnx9" event={"ID":"40fb61d8-d58a-440b-b1c2-45987a86f856","Type":"ContainerDied","Data":"42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a"} Jan 22 17:52:58 crc kubenswrapper[4758]: I0122 17:52:58.508978 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjnx9" event={"ID":"40fb61d8-d58a-440b-b1c2-45987a86f856","Type":"ContainerStarted","Data":"ed40cea9ff1008d85daf0131c0f498a375f5d5d6fa15a269c82537c911f6a965"} Jan 22 17:52:58 crc kubenswrapper[4758]: I0122 17:52:58.510486 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:53:00 crc kubenswrapper[4758]: I0122 17:53:00.537907 4758 generic.go:334] "Generic (PLEG): container finished" podID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerID="4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e" exitCode=0 Jan 22 17:53:00 crc kubenswrapper[4758]: I0122 17:53:00.537999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjnx9" event={"ID":"40fb61d8-d58a-440b-b1c2-45987a86f856","Type":"ContainerDied","Data":"4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e"} Jan 22 17:53:02 crc kubenswrapper[4758]: I0122 17:53:02.577248 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjnx9" event={"ID":"40fb61d8-d58a-440b-b1c2-45987a86f856","Type":"ContainerStarted","Data":"987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304"} Jan 22 17:53:02 crc kubenswrapper[4758]: I0122 17:53:02.608668 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kjnx9" podStartSLOduration=2.502826205 podStartE2EDuration="5.608639102s" podCreationTimestamp="2026-01-22 17:52:57 +0000 UTC" firstStartedPulling="2026-01-22 17:52:58.510211979 +0000 UTC m=+4999.993551264" lastFinishedPulling="2026-01-22 17:53:01.616024876 +0000 UTC m=+5003.099364161" observedRunningTime="2026-01-22 17:53:02.597209741 +0000 UTC m=+5004.080549046" watchObservedRunningTime="2026-01-22 17:53:02.608639102 +0000 UTC m=+5004.091978397" Jan 22 17:53:07 crc kubenswrapper[4758]: I0122 17:53:07.747015 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:53:07 crc kubenswrapper[4758]: I0122 17:53:07.747771 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:53:07 crc kubenswrapper[4758]: I0122 17:53:07.864404 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:53:08 crc kubenswrapper[4758]: I0122 17:53:08.751633 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:53:08 crc kubenswrapper[4758]: I0122 17:53:08.799477 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjnx9"] Jan 22 17:53:10 crc kubenswrapper[4758]: I0122 17:53:10.693896 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kjnx9" podUID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerName="registry-server" containerID="cri-o://987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304" gracePeriod=2 Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.444610 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.530060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-utilities\") pod \"40fb61d8-d58a-440b-b1c2-45987a86f856\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.530260 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hnhg\" (UniqueName: \"kubernetes.io/projected/40fb61d8-d58a-440b-b1c2-45987a86f856-kube-api-access-7hnhg\") pod \"40fb61d8-d58a-440b-b1c2-45987a86f856\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.530299 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-catalog-content\") pod \"40fb61d8-d58a-440b-b1c2-45987a86f856\" (UID: \"40fb61d8-d58a-440b-b1c2-45987a86f856\") " Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.531420 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-utilities" (OuterVolumeSpecName: "utilities") pod "40fb61d8-d58a-440b-b1c2-45987a86f856" (UID: "40fb61d8-d58a-440b-b1c2-45987a86f856"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.543245 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40fb61d8-d58a-440b-b1c2-45987a86f856-kube-api-access-7hnhg" (OuterVolumeSpecName: "kube-api-access-7hnhg") pod "40fb61d8-d58a-440b-b1c2-45987a86f856" (UID: "40fb61d8-d58a-440b-b1c2-45987a86f856"). InnerVolumeSpecName "kube-api-access-7hnhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.555649 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40fb61d8-d58a-440b-b1c2-45987a86f856" (UID: "40fb61d8-d58a-440b-b1c2-45987a86f856"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.633561 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hnhg\" (UniqueName: \"kubernetes.io/projected/40fb61d8-d58a-440b-b1c2-45987a86f856-kube-api-access-7hnhg\") on node \"crc\" DevicePath \"\"" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.633617 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.633634 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fb61d8-d58a-440b-b1c2-45987a86f856-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.719503 4758 generic.go:334] "Generic (PLEG): container finished" podID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerID="987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304" exitCode=0 Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.719547 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjnx9" event={"ID":"40fb61d8-d58a-440b-b1c2-45987a86f856","Type":"ContainerDied","Data":"987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304"} Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.719576 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjnx9" event={"ID":"40fb61d8-d58a-440b-b1c2-45987a86f856","Type":"ContainerDied","Data":"ed40cea9ff1008d85daf0131c0f498a375f5d5d6fa15a269c82537c911f6a965"} Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.719593 4758 scope.go:117] "RemoveContainer" containerID="987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.719602 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjnx9" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.753889 4758 scope.go:117] "RemoveContainer" containerID="4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.777632 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjnx9"] Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.791725 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjnx9"] Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.794447 4758 scope.go:117] "RemoveContainer" containerID="42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.846700 4758 scope.go:117] "RemoveContainer" containerID="987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304" Jan 22 17:53:11 crc kubenswrapper[4758]: E0122 17:53:11.847647 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304\": container with ID starting with 987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304 not found: ID does not exist" containerID="987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.847706 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304"} err="failed to get container status \"987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304\": rpc error: code = NotFound desc = could not find container \"987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304\": container with ID starting with 987de2efd81e95dfdabc3be360784bd7f0f4abd1fce1e57fcf29801a74331304 not found: ID does not exist" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.847814 4758 scope.go:117] "RemoveContainer" containerID="4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e" Jan 22 17:53:11 crc kubenswrapper[4758]: E0122 17:53:11.848364 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e\": container with ID starting with 4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e not found: ID does not exist" containerID="4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.848418 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e"} err="failed to get container status \"4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e\": rpc error: code = NotFound desc = could not find container \"4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e\": container with ID starting with 4b83370f40f03f8056f45fc6221da850f409c8be3160ef311881d948bea8960e not found: ID does not exist" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.848455 4758 scope.go:117] "RemoveContainer" containerID="42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a" Jan 22 17:53:11 crc kubenswrapper[4758]: E0122 17:53:11.848846 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a\": container with ID starting with 42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a not found: ID does not exist" containerID="42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a" Jan 22 17:53:11 crc kubenswrapper[4758]: I0122 17:53:11.848887 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a"} err="failed to get container status \"42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a\": rpc error: code = NotFound desc = could not find container \"42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a\": container with ID starting with 42b98c75546bcc022582d83ba5263a33135c1e0d68aa391462fed457156e437a not found: ID does not exist" Jan 22 17:53:12 crc kubenswrapper[4758]: I0122 17:53:12.820506 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40fb61d8-d58a-440b-b1c2-45987a86f856" path="/var/lib/kubelet/pods/40fb61d8-d58a-440b-b1c2-45987a86f856/volumes" Jan 22 17:53:13 crc kubenswrapper[4758]: I0122 17:53:13.837325 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:53:13 crc kubenswrapper[4758]: I0122 17:53:13.837611 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:53:43 crc kubenswrapper[4758]: I0122 17:53:43.836957 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:53:43 crc kubenswrapper[4758]: I0122 17:53:43.837507 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:54:13 crc kubenswrapper[4758]: I0122 17:54:13.837694 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:54:13 crc kubenswrapper[4758]: I0122 17:54:13.838314 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:54:13 crc kubenswrapper[4758]: I0122 17:54:13.838404 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:54:13 crc kubenswrapper[4758]: I0122 17:54:13.839567 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b914cf88407ffa07bd6d3f02508e35be12faa9fed3ba54f385c87d9e72a3155f"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:54:13 crc kubenswrapper[4758]: I0122 17:54:13.839695 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://b914cf88407ffa07bd6d3f02508e35be12faa9fed3ba54f385c87d9e72a3155f" gracePeriod=600 Jan 22 17:54:14 crc kubenswrapper[4758]: I0122 17:54:14.521751 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="b914cf88407ffa07bd6d3f02508e35be12faa9fed3ba54f385c87d9e72a3155f" exitCode=0 Jan 22 17:54:14 crc kubenswrapper[4758]: I0122 17:54:14.521775 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"b914cf88407ffa07bd6d3f02508e35be12faa9fed3ba54f385c87d9e72a3155f"} Jan 22 17:54:14 crc kubenswrapper[4758]: I0122 17:54:14.522337 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb"} Jan 22 17:54:14 crc kubenswrapper[4758]: I0122 17:54:14.522365 4758 scope.go:117] "RemoveContainer" containerID="bca2620d2cea65c5431bb12ba3e5a5465e86fea66bac21826a76ec638ddabb93" Jan 22 17:55:05 crc kubenswrapper[4758]: I0122 17:55:05.700039 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f52e2571-4001-441f-b7b7-b4746ae1c10d" containerName="galera" probeResult="failure" output="command timed out" Jan 22 17:55:05 crc kubenswrapper[4758]: I0122 17:55:05.701284 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f52e2571-4001-441f-b7b7-b4746ae1c10d" containerName="galera" probeResult="failure" output="command timed out" Jan 22 17:56:43 crc kubenswrapper[4758]: I0122 17:56:43.837415 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:56:43 crc kubenswrapper[4758]: I0122 17:56:43.838056 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:57:10 crc kubenswrapper[4758]: I0122 17:57:10.830599 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f5dlf"] Jan 22 17:57:10 crc kubenswrapper[4758]: E0122 17:57:10.831553 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerName="extract-content" Jan 22 17:57:10 crc kubenswrapper[4758]: I0122 17:57:10.831575 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerName="extract-content" Jan 22 17:57:10 crc kubenswrapper[4758]: E0122 17:57:10.831634 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerName="registry-server" Jan 22 17:57:10 crc kubenswrapper[4758]: I0122 17:57:10.831641 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerName="registry-server" Jan 22 17:57:10 crc kubenswrapper[4758]: E0122 17:57:10.831665 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerName="extract-utilities" Jan 22 17:57:10 crc kubenswrapper[4758]: I0122 17:57:10.831670 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerName="extract-utilities" Jan 22 17:57:10 crc kubenswrapper[4758]: I0122 17:57:10.831915 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fb61d8-d58a-440b-b1c2-45987a86f856" containerName="registry-server" Jan 22 17:57:10 crc kubenswrapper[4758]: I0122 17:57:10.833610 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:10 crc kubenswrapper[4758]: I0122 17:57:10.857112 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f5dlf"] Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.023823 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmzr\" (UniqueName: \"kubernetes.io/projected/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-kube-api-access-2xmzr\") pod \"redhat-operators-f5dlf\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.024676 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-catalog-content\") pod \"redhat-operators-f5dlf\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.024714 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-utilities\") pod \"redhat-operators-f5dlf\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.125992 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-catalog-content\") pod \"redhat-operators-f5dlf\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.126054 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-utilities\") pod \"redhat-operators-f5dlf\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.126169 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xmzr\" (UniqueName: \"kubernetes.io/projected/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-kube-api-access-2xmzr\") pod \"redhat-operators-f5dlf\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.126690 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-catalog-content\") pod \"redhat-operators-f5dlf\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.126711 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-utilities\") pod \"redhat-operators-f5dlf\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.149883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xmzr\" (UniqueName: \"kubernetes.io/projected/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-kube-api-access-2xmzr\") pod \"redhat-operators-f5dlf\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.191470 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:11 crc kubenswrapper[4758]: I0122 17:57:11.686518 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f5dlf"] Jan 22 17:57:12 crc kubenswrapper[4758]: I0122 17:57:12.669039 4758 generic.go:334] "Generic (PLEG): container finished" podID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerID="f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87" exitCode=0 Jan 22 17:57:12 crc kubenswrapper[4758]: I0122 17:57:12.669140 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5dlf" event={"ID":"ead220ed-98e7-4e8a-a489-7c2e46dfedb6","Type":"ContainerDied","Data":"f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87"} Jan 22 17:57:12 crc kubenswrapper[4758]: I0122 17:57:12.670056 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5dlf" event={"ID":"ead220ed-98e7-4e8a-a489-7c2e46dfedb6","Type":"ContainerStarted","Data":"cd4313f908921b3afdc1c39bbe4e50f7b5500d5639daf757ac1190d190c36f2a"} Jan 22 17:57:13 crc kubenswrapper[4758]: I0122 17:57:13.837279 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:57:13 crc kubenswrapper[4758]: I0122 17:57:13.837832 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:57:14 crc kubenswrapper[4758]: I0122 17:57:14.692299 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5dlf" event={"ID":"ead220ed-98e7-4e8a-a489-7c2e46dfedb6","Type":"ContainerStarted","Data":"97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6"} Jan 22 17:57:17 crc kubenswrapper[4758]: I0122 17:57:17.729813 4758 generic.go:334] "Generic (PLEG): container finished" podID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerID="97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6" exitCode=0 Jan 22 17:57:17 crc kubenswrapper[4758]: I0122 17:57:17.729880 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5dlf" event={"ID":"ead220ed-98e7-4e8a-a489-7c2e46dfedb6","Type":"ContainerDied","Data":"97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6"} Jan 22 17:57:18 crc kubenswrapper[4758]: I0122 17:57:18.743648 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5dlf" event={"ID":"ead220ed-98e7-4e8a-a489-7c2e46dfedb6","Type":"ContainerStarted","Data":"ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779"} Jan 22 17:57:18 crc kubenswrapper[4758]: I0122 17:57:18.768908 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f5dlf" podStartSLOduration=3.302955395 podStartE2EDuration="8.768877045s" podCreationTimestamp="2026-01-22 17:57:10 +0000 UTC" firstStartedPulling="2026-01-22 17:57:12.67213835 +0000 UTC m=+5254.155477625" lastFinishedPulling="2026-01-22 17:57:18.13805996 +0000 UTC 
m=+5259.621399275" observedRunningTime="2026-01-22 17:57:18.765045411 +0000 UTC m=+5260.248384736" watchObservedRunningTime="2026-01-22 17:57:18.768877045 +0000 UTC m=+5260.252216330" Jan 22 17:57:21 crc kubenswrapper[4758]: I0122 17:57:21.193024 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:21 crc kubenswrapper[4758]: I0122 17:57:21.193798 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:22 crc kubenswrapper[4758]: I0122 17:57:22.339318 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f5dlf" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerName="registry-server" probeResult="failure" output=< Jan 22 17:57:22 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 17:57:22 crc kubenswrapper[4758]: > Jan 22 17:57:31 crc kubenswrapper[4758]: I0122 17:57:31.247678 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:31 crc kubenswrapper[4758]: I0122 17:57:31.293181 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:31 crc kubenswrapper[4758]: I0122 17:57:31.598112 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f5dlf"] Jan 22 17:57:32 crc kubenswrapper[4758]: I0122 17:57:32.888026 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f5dlf" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerName="registry-server" containerID="cri-o://ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779" gracePeriod=2 Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.430017 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.496074 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-utilities\") pod \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.496253 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-catalog-content\") pod \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.496344 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xmzr\" (UniqueName: \"kubernetes.io/projected/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-kube-api-access-2xmzr\") pod \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\" (UID: \"ead220ed-98e7-4e8a-a489-7c2e46dfedb6\") " Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.497074 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-utilities" (OuterVolumeSpecName: "utilities") pod "ead220ed-98e7-4e8a-a489-7c2e46dfedb6" (UID: "ead220ed-98e7-4e8a-a489-7c2e46dfedb6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.515273 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-kube-api-access-2xmzr" (OuterVolumeSpecName: "kube-api-access-2xmzr") pod "ead220ed-98e7-4e8a-a489-7c2e46dfedb6" (UID: "ead220ed-98e7-4e8a-a489-7c2e46dfedb6"). InnerVolumeSpecName "kube-api-access-2xmzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.599271 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xmzr\" (UniqueName: \"kubernetes.io/projected/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-kube-api-access-2xmzr\") on node \"crc\" DevicePath \"\"" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.599311 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.697104 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ead220ed-98e7-4e8a-a489-7c2e46dfedb6" (UID: "ead220ed-98e7-4e8a-a489-7c2e46dfedb6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.700802 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ead220ed-98e7-4e8a-a489-7c2e46dfedb6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.899173 4758 generic.go:334] "Generic (PLEG): container finished" podID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerID="ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779" exitCode=0 Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.899232 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5dlf" event={"ID":"ead220ed-98e7-4e8a-a489-7c2e46dfedb6","Type":"ContainerDied","Data":"ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779"} Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.899267 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5dlf" event={"ID":"ead220ed-98e7-4e8a-a489-7c2e46dfedb6","Type":"ContainerDied","Data":"cd4313f908921b3afdc1c39bbe4e50f7b5500d5639daf757ac1190d190c36f2a"} Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.899303 4758 scope.go:117] "RemoveContainer" containerID="ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.899506 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f5dlf" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.936675 4758 scope.go:117] "RemoveContainer" containerID="97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6" Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.948959 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f5dlf"] Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.959028 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f5dlf"] Jan 22 17:57:33 crc kubenswrapper[4758]: I0122 17:57:33.975287 4758 scope.go:117] "RemoveContainer" containerID="f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87" Jan 22 17:57:34 crc kubenswrapper[4758]: I0122 17:57:34.041043 4758 scope.go:117] "RemoveContainer" containerID="ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779" Jan 22 17:57:34 crc kubenswrapper[4758]: E0122 17:57:34.041624 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779\": container with ID starting with ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779 not found: ID does not exist" containerID="ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779" Jan 22 17:57:34 crc kubenswrapper[4758]: I0122 17:57:34.041703 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779"} err="failed to get container status \"ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779\": rpc error: code = NotFound desc = could not find container \"ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779\": container with ID starting with ff08cd94e483753f6bb2b3de8f85aedf20269bca8412ea0c2894c7012965b779 not found: ID does not exist" Jan 22 17:57:34 crc kubenswrapper[4758]: I0122 17:57:34.041780 4758 scope.go:117] "RemoveContainer" containerID="97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6" Jan 22 17:57:34 crc kubenswrapper[4758]: E0122 17:57:34.042358 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6\": container with ID starting with 97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6 not found: ID does not exist" containerID="97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6" Jan 22 17:57:34 crc kubenswrapper[4758]: I0122 17:57:34.042392 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6"} err="failed to get container status \"97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6\": rpc error: code = NotFound desc = could not find container \"97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6\": container with ID starting with 97b19c4418c9353bd816151ad9efd423c3de56b3d4c5220c4d141acd7f1217c6 not found: ID does not exist" Jan 22 17:57:34 crc kubenswrapper[4758]: I0122 17:57:34.042416 4758 scope.go:117] "RemoveContainer" containerID="f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87" Jan 22 17:57:34 crc kubenswrapper[4758]: E0122 17:57:34.042821 4758 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87\": container with ID starting with f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87 not found: ID does not exist" containerID="f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87" Jan 22 17:57:34 crc kubenswrapper[4758]: I0122 17:57:34.042852 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87"} err="failed to get container status \"f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87\": rpc error: code = NotFound desc = could not find container \"f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87\": container with ID starting with f238fdcdf2d5e331fcde5d01910941249760d2a844cf148a8db072df93a30a87 not found: ID does not exist" Jan 22 17:57:34 crc kubenswrapper[4758]: I0122 17:57:34.824266 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" path="/var/lib/kubelet/pods/ead220ed-98e7-4e8a-a489-7c2e46dfedb6/volumes" Jan 22 17:57:43 crc kubenswrapper[4758]: I0122 17:57:43.837288 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:57:43 crc kubenswrapper[4758]: I0122 17:57:43.837949 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:57:43 crc kubenswrapper[4758]: I0122 17:57:43.837994 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 17:57:43 crc kubenswrapper[4758]: I0122 17:57:43.839035 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:57:43 crc kubenswrapper[4758]: I0122 17:57:43.839108 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" gracePeriod=600 Jan 22 17:57:44 crc kubenswrapper[4758]: E0122 17:57:44.022114 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:57:45 crc kubenswrapper[4758]: I0122 17:57:45.014812 4758 generic.go:334] "Generic 
(PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" exitCode=0 Jan 22 17:57:45 crc kubenswrapper[4758]: I0122 17:57:45.014865 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb"} Jan 22 17:57:45 crc kubenswrapper[4758]: I0122 17:57:45.014909 4758 scope.go:117] "RemoveContainer" containerID="b914cf88407ffa07bd6d3f02508e35be12faa9fed3ba54f385c87d9e72a3155f" Jan 22 17:57:45 crc kubenswrapper[4758]: I0122 17:57:45.015418 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 17:57:45 crc kubenswrapper[4758]: E0122 17:57:45.015805 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:57:55 crc kubenswrapper[4758]: I0122 17:57:55.699632 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f52e2571-4001-441f-b7b7-b4746ae1c10d" containerName="galera" probeResult="failure" output="command timed out" Jan 22 17:57:55 crc kubenswrapper[4758]: I0122 17:57:55.701767 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f52e2571-4001-441f-b7b7-b4746ae1c10d" containerName="galera" probeResult="failure" output="command timed out" Jan 22 17:57:56 crc kubenswrapper[4758]: I0122 17:57:56.808228 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 17:57:56 crc kubenswrapper[4758]: E0122 17:57:56.808835 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:58:10 crc kubenswrapper[4758]: I0122 17:58:10.808612 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 17:58:10 crc kubenswrapper[4758]: E0122 17:58:10.809415 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:58:24 crc kubenswrapper[4758]: I0122 17:58:24.808335 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 17:58:24 crc kubenswrapper[4758]: E0122 17:58:24.809446 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.836834 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qxqsx"] Jan 22 17:58:29 crc kubenswrapper[4758]: E0122 17:58:29.839173 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerName="registry-server" Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.839204 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerName="registry-server" Jan 22 17:58:29 crc kubenswrapper[4758]: E0122 17:58:29.839234 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerName="extract-utilities" Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.839240 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerName="extract-utilities" Jan 22 17:58:29 crc kubenswrapper[4758]: E0122 17:58:29.839250 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerName="extract-content" Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.839256 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerName="extract-content" Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.839495 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead220ed-98e7-4e8a-a489-7c2e46dfedb6" containerName="registry-server" Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.842298 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.865496 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxqsx"] Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.989622 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-utilities\") pod \"certified-operators-qxqsx\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.989852 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7xrs\" (UniqueName: \"kubernetes.io/projected/04a0da3e-0209-4ec2-9c6f-118c19d1499d-kube-api-access-l7xrs\") pod \"certified-operators-qxqsx\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:29 crc kubenswrapper[4758]: I0122 17:58:29.989882 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-catalog-content\") pod \"certified-operators-qxqsx\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:30 crc kubenswrapper[4758]: I0122 17:58:30.092412 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7xrs\" (UniqueName: \"kubernetes.io/projected/04a0da3e-0209-4ec2-9c6f-118c19d1499d-kube-api-access-l7xrs\") pod \"certified-operators-qxqsx\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:30 crc kubenswrapper[4758]: I0122 17:58:30.092468 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-catalog-content\") pod \"certified-operators-qxqsx\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:30 crc kubenswrapper[4758]: I0122 17:58:30.092500 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-utilities\") pod \"certified-operators-qxqsx\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:30 crc kubenswrapper[4758]: I0122 17:58:30.093009 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-catalog-content\") pod \"certified-operators-qxqsx\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:30 crc kubenswrapper[4758]: I0122 17:58:30.093105 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-utilities\") pod \"certified-operators-qxqsx\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:30 crc kubenswrapper[4758]: I0122 17:58:30.124704 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l7xrs\" (UniqueName: \"kubernetes.io/projected/04a0da3e-0209-4ec2-9c6f-118c19d1499d-kube-api-access-l7xrs\") pod \"certified-operators-qxqsx\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:30 crc kubenswrapper[4758]: I0122 17:58:30.179128 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:30 crc kubenswrapper[4758]: I0122 17:58:30.718175 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxqsx"] Jan 22 17:58:31 crc kubenswrapper[4758]: I0122 17:58:31.443905 4758 generic.go:334] "Generic (PLEG): container finished" podID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerID="fa1b086d55f0cb9915704cac08f8293d01808bb4a1880a80e465bbe11cbc88e4" exitCode=0 Jan 22 17:58:31 crc kubenswrapper[4758]: I0122 17:58:31.443960 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxqsx" event={"ID":"04a0da3e-0209-4ec2-9c6f-118c19d1499d","Type":"ContainerDied","Data":"fa1b086d55f0cb9915704cac08f8293d01808bb4a1880a80e465bbe11cbc88e4"} Jan 22 17:58:31 crc kubenswrapper[4758]: I0122 17:58:31.444148 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxqsx" event={"ID":"04a0da3e-0209-4ec2-9c6f-118c19d1499d","Type":"ContainerStarted","Data":"282554871aa4d03bb904a9f64a5b7359db2abe0cd4b1d3aea24348350a442080"} Jan 22 17:58:31 crc kubenswrapper[4758]: I0122 17:58:31.446418 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:58:32 crc kubenswrapper[4758]: I0122 17:58:32.453838 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxqsx" event={"ID":"04a0da3e-0209-4ec2-9c6f-118c19d1499d","Type":"ContainerStarted","Data":"4f1b19a74dee9013aa1077e5b85589201b713c0389dd3dcc8e1863d821d0e48b"} Jan 22 17:58:33 crc kubenswrapper[4758]: I0122 17:58:33.466887 4758 generic.go:334] "Generic (PLEG): container finished" podID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerID="4f1b19a74dee9013aa1077e5b85589201b713c0389dd3dcc8e1863d821d0e48b" exitCode=0 Jan 22 17:58:33 crc kubenswrapper[4758]: I0122 17:58:33.466983 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxqsx" event={"ID":"04a0da3e-0209-4ec2-9c6f-118c19d1499d","Type":"ContainerDied","Data":"4f1b19a74dee9013aa1077e5b85589201b713c0389dd3dcc8e1863d821d0e48b"} Jan 22 17:58:34 crc kubenswrapper[4758]: I0122 17:58:34.479331 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxqsx" event={"ID":"04a0da3e-0209-4ec2-9c6f-118c19d1499d","Type":"ContainerStarted","Data":"e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658"} Jan 22 17:58:34 crc kubenswrapper[4758]: I0122 17:58:34.511016 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qxqsx" podStartSLOduration=3.039849225 podStartE2EDuration="5.510984054s" podCreationTimestamp="2026-01-22 17:58:29 +0000 UTC" firstStartedPulling="2026-01-22 17:58:31.445998507 +0000 UTC m=+5332.929337812" lastFinishedPulling="2026-01-22 17:58:33.917133356 +0000 UTC m=+5335.400472641" observedRunningTime="2026-01-22 17:58:34.502581955 +0000 UTC m=+5335.985921240" watchObservedRunningTime="2026-01-22 
17:58:34.510984054 +0000 UTC m=+5335.994323339" Jan 22 17:58:39 crc kubenswrapper[4758]: I0122 17:58:39.809224 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 17:58:39 crc kubenswrapper[4758]: E0122 17:58:39.810584 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:58:40 crc kubenswrapper[4758]: I0122 17:58:40.180577 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:40 crc kubenswrapper[4758]: I0122 17:58:40.180711 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:40 crc kubenswrapper[4758]: I0122 17:58:40.232151 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:40 crc kubenswrapper[4758]: I0122 17:58:40.598878 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:58:40 crc kubenswrapper[4758]: I0122 17:58:40.649383 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxqsx"] Jan 22 17:58:42 crc kubenswrapper[4758]: I0122 17:58:42.568710 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qxqsx" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="registry-server" containerID="cri-o://e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" gracePeriod=2 Jan 22 17:58:46 crc kubenswrapper[4758]: I0122 17:58:46.847972 4758 patch_prober.go:28] interesting pod/oauth-openshift-65454647d6-pr5dd container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 17:58:46 crc kubenswrapper[4758]: I0122 17:58:46.848487 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" podUID="9deedfb3-0e0e-4287-81de-8131aac4b6b0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 17:58:49 crc kubenswrapper[4758]: I0122 17:58:49.701101 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="93923998-0016-4db9-adff-a433c7a8d57c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 22 17:58:50 crc kubenswrapper[4758]: E0122 17:58:50.180520 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 
17:58:50 crc kubenswrapper[4758]: E0122 17:58:50.181003 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 17:58:50 crc kubenswrapper[4758]: E0122 17:58:50.181638 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 17:58:50 crc kubenswrapper[4758]: E0122 17:58:50.181691 4758 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-qxqsx" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="registry-server" Jan 22 17:58:50 crc kubenswrapper[4758]: I0122 17:58:50.351045 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qxqsx_04a0da3e-0209-4ec2-9c6f-118c19d1499d/registry-server/0.log" Jan 22 17:58:50 crc kubenswrapper[4758]: I0122 17:58:50.352156 4758 generic.go:334] "Generic (PLEG): container finished" podID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" exitCode=-1 Jan 22 17:58:50 crc kubenswrapper[4758]: I0122 17:58:50.352206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxqsx" event={"ID":"04a0da3e-0209-4ec2-9c6f-118c19d1499d","Type":"ContainerDied","Data":"e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658"} Jan 22 17:58:54 crc kubenswrapper[4758]: I0122 17:58:54.702723 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="93923998-0016-4db9-adff-a433c7a8d57c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 22 17:58:54 crc kubenswrapper[4758]: I0122 17:58:54.721917 4758 patch_prober.go:28] interesting pod/route-controller-manager-5876db6c88-xtp4p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 17:58:54 crc kubenswrapper[4758]: I0122 17:58:54.722029 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5876db6c88-xtp4p" podUID="44a7e8fc-3f05-4b46-bbff-0a3394b8d884" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 17:58:54 crc kubenswrapper[4758]: I0122 17:58:54.809046 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 17:58:54 crc kubenswrapper[4758]: E0122 17:58:54.809444 4758 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:58:59 crc kubenswrapper[4758]: I0122 17:58:59.699952 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="93923998-0016-4db9-adff-a433c7a8d57c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 22 17:58:59 crc kubenswrapper[4758]: I0122 17:58:59.700452 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 22 17:58:59 crc kubenswrapper[4758]: I0122 17:58:59.701130 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"ac9b523b39a8fc616563df35ca3aa97f65c7d130f93997569e78a6b68ebfdb47"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 22 17:58:59 crc kubenswrapper[4758]: I0122 17:58:59.701210 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93923998-0016-4db9-adff-a433c7a8d57c" containerName="ceilometer-central-agent" containerID="cri-o://ac9b523b39a8fc616563df35ca3aa97f65c7d130f93997569e78a6b68ebfdb47" gracePeriod=30 Jan 22 17:58:59 crc kubenswrapper[4758]: I0122 17:58:59.704073 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="93923998-0016-4db9-adff-a433c7a8d57c" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 22 17:59:00 crc kubenswrapper[4758]: E0122 17:59:00.181048 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 17:59:00 crc kubenswrapper[4758]: E0122 17:59:00.181397 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 17:59:00 crc kubenswrapper[4758]: E0122 17:59:00.181792 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 17:59:00 crc kubenswrapper[4758]: E0122 17:59:00.181827 4758 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: 
container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-qxqsx" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="registry-server" Jan 22 17:59:06 crc kubenswrapper[4758]: I0122 17:59:06.841387 4758 patch_prober.go:28] interesting pod/oauth-openshift-65454647d6-pr5dd container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 17:59:06 crc kubenswrapper[4758]: I0122 17:59:06.841891 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" podUID="9deedfb3-0e0e-4287-81de-8131aac4b6b0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 17:59:06 crc kubenswrapper[4758]: I0122 17:59:06.842176 4758 patch_prober.go:28] interesting pod/oauth-openshift-65454647d6-pr5dd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 17:59:06 crc kubenswrapper[4758]: I0122 17:59:06.842316 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" podUID="9deedfb3-0e0e-4287-81de-8131aac4b6b0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 17:59:07 crc kubenswrapper[4758]: I0122 17:59:07.808972 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 17:59:07 crc kubenswrapper[4758]: E0122 17:59:07.809911 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:59:10 crc kubenswrapper[4758]: E0122 17:59:10.181109 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 17:59:10 crc kubenswrapper[4758]: E0122 17:59:10.182043 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 17:59:10 crc kubenswrapper[4758]: E0122 17:59:10.182812 4758 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 17:59:10 crc kubenswrapper[4758]: E0122 17:59:10.182893 4758 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-qxqsx" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="registry-server" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.051683 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.148446 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-utilities\") pod \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.148487 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-catalog-content\") pod \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.148963 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7xrs\" (UniqueName: \"kubernetes.io/projected/04a0da3e-0209-4ec2-9c6f-118c19d1499d-kube-api-access-l7xrs\") pod \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\" (UID: \"04a0da3e-0209-4ec2-9c6f-118c19d1499d\") " Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.151443 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-utilities" (OuterVolumeSpecName: "utilities") pod "04a0da3e-0209-4ec2-9c6f-118c19d1499d" (UID: "04a0da3e-0209-4ec2-9c6f-118c19d1499d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.155710 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a0da3e-0209-4ec2-9c6f-118c19d1499d-kube-api-access-l7xrs" (OuterVolumeSpecName: "kube-api-access-l7xrs") pod "04a0da3e-0209-4ec2-9c6f-118c19d1499d" (UID: "04a0da3e-0209-4ec2-9c6f-118c19d1499d"). InnerVolumeSpecName "kube-api-access-l7xrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.203998 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04a0da3e-0209-4ec2-9c6f-118c19d1499d" (UID: "04a0da3e-0209-4ec2-9c6f-118c19d1499d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.251617 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.251660 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a0da3e-0209-4ec2-9c6f-118c19d1499d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.251676 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7xrs\" (UniqueName: \"kubernetes.io/projected/04a0da3e-0209-4ec2-9c6f-118c19d1499d-kube-api-access-l7xrs\") on node \"crc\" DevicePath \"\"" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.667111 4758 generic.go:334] "Generic (PLEG): container finished" podID="93923998-0016-4db9-adff-a433c7a8d57c" containerID="ac9b523b39a8fc616563df35ca3aa97f65c7d130f93997569e78a6b68ebfdb47" exitCode=0 Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.668087 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93923998-0016-4db9-adff-a433c7a8d57c","Type":"ContainerDied","Data":"ac9b523b39a8fc616563df35ca3aa97f65c7d130f93997569e78a6b68ebfdb47"} Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.668141 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93923998-0016-4db9-adff-a433c7a8d57c","Type":"ContainerStarted","Data":"78c0a17a53872bad0596bc52674aed65fa8cb34ed769d3f1df8a5b9572b9133e"} Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.672908 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxqsx" event={"ID":"04a0da3e-0209-4ec2-9c6f-118c19d1499d","Type":"ContainerDied","Data":"282554871aa4d03bb904a9f64a5b7359db2abe0cd4b1d3aea24348350a442080"} Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.672965 4758 scope.go:117] "RemoveContainer" containerID="e1d102f8e4e4d97a2286f3e2cda42ae1f64f87a935a97a3d5f4e23c235be6658" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.673157 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxqsx" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.704877 4758 scope.go:117] "RemoveContainer" containerID="4f1b19a74dee9013aa1077e5b85589201b713c0389dd3dcc8e1863d821d0e48b" Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.728976 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxqsx"] Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.739458 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qxqsx"] Jan 22 17:59:15 crc kubenswrapper[4758]: I0122 17:59:15.745626 4758 scope.go:117] "RemoveContainer" containerID="fa1b086d55f0cb9915704cac08f8293d01808bb4a1880a80e465bbe11cbc88e4" Jan 22 17:59:16 crc kubenswrapper[4758]: I0122 17:59:16.822960 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" path="/var/lib/kubelet/pods/04a0da3e-0209-4ec2-9c6f-118c19d1499d/volumes" Jan 22 17:59:20 crc kubenswrapper[4758]: I0122 17:59:20.808967 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 17:59:20 crc kubenswrapper[4758]: E0122 17:59:20.809827 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 17:59:29 crc kubenswrapper[4758]: I0122 17:59:29.702864 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="93923998-0016-4db9-adff-a433c7a8d57c" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:32.807853 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 17:59:32.808391 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:33.402000 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:33.833917 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:33.833992 4758 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 17:59:34.257150 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T17:59:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T17:59:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T17:59:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T17:59:24Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 17:59:35.499147 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:36.122069 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qs76m" podUID="00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:36.439976 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:38.832786 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:38.833371 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:43.397278 4758 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:43.809371 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 17:59:43.809713 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:43.834809 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:43.834899 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 17:59:44.257820 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 17:59:45.499918 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:45.694038 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" podUID="e7fdd2cd-e517-46b5-acb3-22b59b7f132f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:45.694280 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" podUID="e7fdd2cd-e517-46b5-acb3-22b59b7f132f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:45.699347 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f52e2571-4001-441f-b7b7-b4746ae1c10d" containerName="galera" probeResult="failure" output="command timed out" Jan 22 18:00:29 crc 
kubenswrapper[4758]: I0122 17:59:45.703397 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f52e2571-4001-441f-b7b7-b4746ae1c10d" containerName="galera" probeResult="failure" output="command timed out" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:46.210940 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:46.211030 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:46.211231 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:46.211449 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:46.480993 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:46.481582 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:46.612006 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:46.612016 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:47.254867 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:47.254961 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:48.833631 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:48.834056 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:53.397712 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:53.398333 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:53.399307 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"78a6ec775e3414b464115c9d589c3eae8881ff824d356dbc942d4deea2d4d1d1"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:53.399358 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" containerName="kube-state-metrics" containerID="cri-o://78a6ec775e3414b464115c9d589c3eae8881ff824d356dbc942d4deea2d4d1d1" gracePeriod=30 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:53.835075 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:53.835219 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:53.835393 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 17:59:54.258502 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:54.947765 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" podUID="4801e5d3-a66d-4856-bfc2-95dfebf6f442" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:54.947901 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" podUID="4801e5d3-a66d-4856-bfc2-95dfebf6f442" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:55.297760 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:55.297853 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 17:59:55.501207 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:55.610723 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" podUID="901f347a-3b10-4392-8247-41a859112544" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:55.653025 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" podUID="e7fdd2cd-e517-46b5-acb3-22b59b7f132f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:55.982400 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:56.023946 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:56.040095 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:56.438996 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:56.439122 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:56.572296 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:56.851995 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" podUID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:56.852023 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" podUID="d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:56.852029 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" podUID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:57.093935 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:57.482113 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:57.876321 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 17:59:57.876589 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.088093 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-lpprz" podUID="cc433179-ae5b-4250-80c2-97af371fdfed" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.089582 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-lpprz" podUID="cc433179-ae5b-4250-80c2-97af371fdfed" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.268605 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.268685 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.396943 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8081/readyz\": dial tcp 10.217.0.221:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.835757 4758 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.835851 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.836021 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.836970 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.838361 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-apiserver" containerStatusID={"Type":"cri-o","ID":"8e33eb125ab84769bb47bfb5bbf4c3643562a9ae950fe7f4a6f3ddde4057d86b"} pod="openshift-kube-apiserver/kube-apiserver-crc" containerMessage="Container kube-apiserver failed liveness probe, will be restarted" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:58.839111 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" containerID="cri-o://8e33eb125ab84769bb47bfb5bbf4c3643562a9ae950fe7f4a6f3ddde4057d86b" gracePeriod=15 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:59.704615 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="93923998-0016-4db9-adff-a433c7a8d57c" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:59.704679 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:59.705528 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-notification-agent" containerStatusID={"Type":"cri-o","ID":"fafb5d2fa75b2b190a38003bc6cece90b275597f24e157d6ae4d1a4780c75472"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-notification-agent failed liveness probe, will be restarted" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 17:59:59.705585 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93923998-0016-4db9-adff-a433c7a8d57c" containerName="ceilometer-notification-agent" containerID="cri-o://fafb5d2fa75b2b190a38003bc6cece90b275597f24e157d6ae4d1a4780c75472" gracePeriod=30 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:02.083811 4758 generic.go:334] "Generic (PLEG): container finished" podID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" containerID="78a6ec775e3414b464115c9d589c3eae8881ff824d356dbc942d4deea2d4d1d1" exitCode=-1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:02.083981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78","Type":"ContainerDied","Data":"78a6ec775e3414b464115c9d589c3eae8881ff824d356dbc942d4deea2d4d1d1"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 
18:00:03.835762 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:03.836130 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:04.259322 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:04.893296 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" podUID="4801e5d3-a66d-4856-bfc2-95dfebf6f442" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:05.298596 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:05.298658 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:05.501660 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:05.620253 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" podUID="901f347a-3b10-4392-8247-41a859112544" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:05.620679 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" podUID="901f347a-3b10-4392-8247-41a859112544" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:05.704047 4758 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" podUID="e7fdd2cd-e517-46b5-acb3-22b59b7f132f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:05.704451 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" podUID="e7fdd2cd-e517-46b5-acb3-22b59b7f132f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:05.704596 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.108029 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.108156 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.108404 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.108483 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.108548 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.109886 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/healthz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.109895 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.109939 4758 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.498889 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.499074 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.573165 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.573156 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.573286 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.746986 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" podUID="e7fdd2cd-e517-46b5-acb3-22b59b7f132f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.993925 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" podUID="d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.993960 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" podUID="40845474-36a2-48c0-a0df-af5deb2a31fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.994048 4758 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" podUID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.994066 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" podUID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.994042 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" podUID="40845474-36a2-48c0-a0df-af5deb2a31fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:06.994174 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" podUID="d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:07.096877 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:07.096971 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:07.137915 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:07.221310 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:07.222144 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:07.615024 4758 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:08.034981 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-lpprz" podUID="cc433179-ae5b-4250-80c2-97af371fdfed" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:08.035080 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-lpprz" podUID="cc433179-ae5b-4250-80c2-97af371fdfed" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.096725 4758 reflector.go:484] object-"openstack"/"galera-openstack-cell1-dockercfg-thg4w": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.096802 4758 reflector.go:484] object-"openstack"/"cert-glance-default-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.096835 4758 reflector.go:484] object-"openshift-dns"/"dns-default-metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.096865 4758 reflector.go:484] object-"openshift-cluster-version"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.096896 4758 reflector.go:484] object-"openshift-etcd-operator"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.096936 4758 reflector.go:484] object-"openstack"/"cert-galera-openstack-cell1-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.096974 4758 reflector.go:484] object-"openstack"/"ceilometer-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097010 4758 reflector.go:484] object-"openstack"/"cert-placement-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has 
prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097054 4758 reflector.go:484] object-"cert-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097086 4758 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097140 4758 reflector.go:484] object-"openstack"/"nova-metadata-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097171 4758 reflector.go:484] object-"openshift-route-controller-manager"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097235 4758 reflector.go:484] object-"openshift-image-registry"/"image-registry-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097267 4758 reflector.go:484] object-"openstack"/"memcached-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097297 4758 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097327 4758 reflector.go:484] object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097357 4758 reflector.go:484] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097390 4758 reflector.go:484] object-"openstack"/"watcher-decision-engine-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097420 4758 reflector.go:484] object-"openshift-dns-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097455 4758 reflector.go:484] object-"openshift-authentication-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.097511 4758 reflector.go:484] object-"openshift-cluster-version"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.098198 4758 reflector.go:484] object-"openstack"/"horizon-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.098401 4758 reflector.go:484] object-"openshift-authentication-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.099071 4758 reflector.go:484] object-"openstack"/"cert-swift-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.099197 4758 reflector.go:484] object-"openstack"/"cert-nova-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.099199 4758 reflector.go:484] object-"openstack"/"keystone-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.099379 4758 reflector.go:484] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.099488 4758 reflector.go:484] object-"cert-manager"/"cert-manager-webhook-dockercfg-9xxdc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.099495 4758 reflector.go:484] object-"openstack"/"cert-rabbitmq-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.099645 4758 reflector.go:484] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f7gls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") 
has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.099782 4758 reflector.go:484] object-"openshift-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.099861 4758 reflector.go:484] object-"hostpath-provisioner"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.100062 4758 reflector.go:484] object-"openstack"/"nova-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.100124 4758 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.100129 4758 reflector.go:484] object-"openshift-etcd-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.100372 4758 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.100440 4758 reflector.go:484] object-"openshift-service-ca"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.100563 4758 reflector.go:484] object-"openstack"/"cert-barbican-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.100662 4758 reflector.go:484] object-"openstack"/"cert-ceilometer-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:08.100505 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events/ceilometer-0.188d1f633c8ab212\": http2: client connection lost" event="&Event{ObjectMeta:{ceilometer-0.188d1f633c8ab212 openstack 84476 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:ceilometer-0,UID:93923998-0016-4db9-adff-a433c7a8d57c,APIVersion:v1,ResourceVersion:49775,FieldPath:spec.containers{ceilometer-notification-agent},},Reason:Unhealthy,Message:Liveness probe failed: command timed out,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 17:58:59 +0000 UTC,LastTimestamp:2026-01-22 17:59:29.70354374 +0000 UTC m=+5391.186883055,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.100813 4758 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.100938 4758 reflector.go:484] object-"openshift-ingress-canary"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101050 4758 reflector.go:484] object-"metallb-system"/"frr-k8s-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101220 4758 reflector.go:484] object-"openshift-machine-config-operator"/"mco-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101242 4758 reflector.go:484] object-"openshift-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101304 4758 reflector.go:484] object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p9vjx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101324 4758 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101397 4758 reflector.go:484] object-"openstack"/"rabbitmq-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101441 4758 reflector.go:484] object-"openstack"/"rabbitmq-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc 
kubenswrapper[4758]: W0122 18:00:08.101550 4758 reflector.go:484] object-"openshift-nmstate"/"nmstate-handler-dockercfg-v97lh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101646 4758 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101731 4758 reflector.go:484] object-"openstack"/"ovncontroller-metrics-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101820 4758 reflector.go:484] object-"openshift-ingress"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101833 4758 reflector.go:484] object-"openshift-service-ca-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.101952 4758 reflector.go:484] object-"openstack"/"openstack-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102032 4758 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102069 4758 reflector.go:484] object-"openshift-cluster-samples-operator"/"samples-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102103 4758 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102132 4758 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102209 4758 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the 
request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102256 4758 reflector.go:484] object-"openstack"/"rabbitmq-server-dockercfg-d8jxf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102341 4758 reflector.go:484] object-"openshift-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102344 4758 reflector.go:484] object-"openstack"/"glance-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102673 4758 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102712 4758 reflector.go:484] object-"openshift-service-ca"/"signing-cabundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102843 4758 reflector.go:484] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2w6mb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.102930 4758 reflector.go:484] object-"openshift-console"/"oauth-serving-cert": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.103005 4758 reflector.go:484] object-"openstack"/"ovndbcluster-sb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.103063 4758 reflector.go:484] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:08.103177 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": http2: client connection lost" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:08.103237 4758 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.103934 4758 reflector.go:484] 
object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104028 4758 reflector.go:484] object-"openstack"/"openstack-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104087 4758 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104144 4758 reflector.go:484] object-"openstack"/"rabbitmq-cell1-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104200 4758 reflector.go:484] object-"openshift-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104258 4758 reflector.go:484] object-"openshift-service-ca"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104329 4758 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104388 4758 reflector.go:484] object-"openstack"/"glance-default-external-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104449 4758 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104595 4758 reflector.go:484] object-"openstack"/"dns-svc": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104651 4758 reflector.go:484] object-"openstack"/"neutron-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104712 4758 reflector.go:484] 
object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104793 4758 reflector.go:484] object-"cert-manager"/"cert-manager-cainjector-dockercfg-x4h8f": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104853 4758 reflector.go:484] object-"openshift-service-ca-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104905 4758 reflector.go:484] object-"openshift-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.104983 4758 reflector.go:484] object-"openstack"/"horizon": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:08.105563 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": http2: client connection lost" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:08.105590 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.105661 4758 reflector.go:484] object-"openstack"/"test-operator-controller-priv-key": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.105949 4758 reflector.go:484] object-"openshift-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106032 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-session": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106095 4758 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106153 4758 reflector.go:484] object-"openstack"/"rabbitmq-notifications-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: 
http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106207 4758 reflector.go:484] object-"openstack"/"cert-ovndbcluster-sb-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106291 4758 reflector.go:484] object-"openstack"/"cert-galera-openstack-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106382 4758 reflector.go:484] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106442 4758 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106495 4758 reflector.go:484] object-"openshift-nmstate"/"nginx-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106590 4758 reflector.go:484] object-"openstack"/"ovndbcluster-nb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106647 4758 reflector.go:484] object-"openshift-ingress-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106703 4758 reflector.go:484] object-"openshift-authentication-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106782 4758 reflector.go:484] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dbtnp": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106837 4758 reflector.go:484] object-"openstack"/"prometheus-metric-storage": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106893 4758 reflector.go:484] object-"openshift-console"/"console-oauth-config": watch of *v1.Secret ended with: an error on the server ("unable to 
decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106947 4758 reflector.go:484] object-"openshift-ingress"/"router-metrics-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.106998 4758 reflector.go:484] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.107053 4758 reflector.go:484] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.107123 4758 reflector.go:484] object-"openstack"/"cert-kube-state-metrics-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.107182 4758 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-x59mw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.107625 4758 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.107683 4758 reflector.go:484] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-d798m": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.107737 4758 reflector.go:484] object-"openshift-ingress"/"router-dockercfg-zdk86": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108168 4758 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108209 4758 reflector.go:484] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108322 4758 reflector.go:484] 
object-"openstack"/"openstack-edpm-ipam": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108402 4758 reflector.go:484] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8t2s8": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108431 4758 reflector.go:484] object-"openstack"/"cert-rabbitmq-cell1-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108569 4758 reflector.go:484] object-"openshift-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108625 4758 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108679 4758 reflector.go:484] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108779 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-login": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108801 4758 reflector.go:484] object-"openshift-console"/"service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108820 4758 reflector.go:484] object-"openstack"/"cert-placement-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108879 4758 reflector.go:484] object-"openshift-nmstate"/"nmstate-operator-dockercfg-2sf4f": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.108902 4758 reflector.go:484] object-"openstack"/"cert-swift-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 
18:00:08.108957 4758 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109011 4758 reflector.go:484] object-"openstack-operators"/"webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109064 4758 reflector.go:484] object-"openstack"/"swift-ring-files": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109151 4758 reflector.go:484] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109209 4758 reflector.go:484] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pz96z": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109261 4758 reflector.go:484] object-"openstack"/"cinder-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109311 4758 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109364 4758 reflector.go:484] object-"openstack"/"keystone-keystone-dockercfg-q7l7k": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109417 4758 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109485 4758 reflector.go:484] object-"openshift-nmstate"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109538 4758 reflector.go:484] object-"openshift-marketplace"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the 
request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109589 4758 reflector.go:484] object-"openstack"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109639 4758 reflector.go:484] object-"openstack"/"rabbitmq-cell1-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109690 4758 reflector.go:484] object-"openshift-ingress"/"router-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109767 4758 reflector.go:484] object-"openstack"/"swift-swift-dockercfg-xgjlh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109816 4758 reflector.go:484] object-"openshift-controller-manager"/"openshift-global-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109867 4758 reflector.go:484] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.109958 4758 reflector.go:484] object-"openstack"/"cert-nova-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110047 4758 reflector.go:484] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110077 4758 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110083 4758 reflector.go:484] object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zpd54": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110142 4758 reflector.go:484] object-"openstack-operators"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client 
connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110172 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110194 4758 reflector.go:484] object-"openshift-route-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110199 4758 reflector.go:484] object-"openstack"/"openstack-config-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110332 4758 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110354 4758 reflector.go:484] object-"openstack"/"rabbitmq-notifications-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110364 4758 reflector.go:484] object-"openshift-nmstate"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110438 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110494 4758 reflector.go:484] object-"openstack"/"cert-watcher-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110510 4758 reflector.go:484] object-"metallb-system"/"frr-k8s-daemon-dockercfg-s75rc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110560 4758 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110667 4758 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-dockercfg-5sdkn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from 
the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110670 4758 reflector.go:484] object-"openstack"/"ovsdbserver-nb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110700 4758 reflector.go:484] object-"openstack"/"rabbitmq-notifications-server-dockercfg-8d4mj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110719 4758 reflector.go:484] object-"openstack"/"dns-swift-storage-0": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110727 4758 reflector.go:484] object-"openstack"/"rabbitmq-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110794 4758 reflector.go:484] object-"openshift-dns"/"dns-default": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110815 4758 reflector.go:484] object-"openstack"/"ovnnorthd-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110835 4758 reflector.go:484] object-"openstack-operators"/"metrics-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110869 4758 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110887 4758 reflector.go:484] object-"openshift-apiserver"/"image-import-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110931 4758 reflector.go:484] object-"openshift-route-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.110943 4758 reflector.go:484] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-z2sxt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client 
connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111064 4758 reflector.go:484] object-"openstack"/"cert-ovn-metrics": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111136 4758 reflector.go:484] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111208 4758 reflector.go:484] object-"cert-manager"/"cert-manager-dockercfg-qcl9m": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111230 4758 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111255 4758 reflector.go:484] object-"openstack-operators"/"infra-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111351 4758 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111377 4758 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111359 4758 reflector.go:484] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111427 4758 reflector.go:484] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111481 4758 reflector.go:484] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111495 4758 reflector.go:484] 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111541 4758 reflector.go:484] object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111547 4758 reflector.go:484] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111556 4758 reflector.go:484] object-"metallb-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111730 4758 reflector.go:484] object-"hostpath-provisioner"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111768 4758 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111788 4758 reflector.go:484] object-"openshift-ingress-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111816 4758 reflector.go:484] object-"openstack"/"placement-placement-dockercfg-n4qvk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111844 4758 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111857 4758 reflector.go:484] object-"openshift-etcd-operator"/"etcd-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111883 4758 reflector.go:484] object-"openstack"/"openstack-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: 
W0122 18:00:08.111880 4758 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111911 4758 reflector.go:484] object-"openstack"/"rabbitmq-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111951 4758 reflector.go:484] object-"openshift-marketplace"/"marketplace-operator-metrics": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111962 4758 reflector.go:484] object-"openstack"/"rabbitmq-notifications-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.111986 4758 reflector.go:484] object-"openshift-nmstate"/"openshift-nmstate-webhook": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112053 4758 reflector.go:484] object-"openstack"/"cert-ovnnorthd-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112077 4758 reflector.go:484] object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4q6rk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112111 4758 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112165 4758 reflector.go:484] object-"metallb-system"/"metallb-webhook-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112212 4758 reflector.go:484] object-"openstack"/"rabbitmq-notifications-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112238 4758 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 
18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112266 4758 reflector.go:484] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112292 4758 reflector.go:484] object-"openshift-apiserver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112318 4758 reflector.go:484] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:08.112301 4758 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/persistence-rabbitmq-notifications-server-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-notifications-server-0\": http2: client connection lost" pod="openstack/rabbitmq-notifications-server-0" volumeName="persistence" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112344 4758 reflector.go:484] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pdg6h": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112370 4758 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112393 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112416 4758 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112441 4758 reflector.go:484] object-"openstack"/"watcher-applier-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112466 4758 reflector.go:484] object-"openshift-authentication"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 
18:00:08.112490 4758 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112516 4758 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112540 4758 reflector.go:484] object-"openshift-ingress-canary"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112564 4758 reflector.go:484] object-"openstack"/"cinder-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112590 4758 reflector.go:484] object-"openstack"/"cert-memcached-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112610 4758 reflector.go:484] object-"openshift-ingress"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112634 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-error": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112645 4758 reflector.go:484] object-"openshift-image-registry"/"image-registry-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112669 4758 reflector.go:484] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pxl5h": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112692 4758 reflector.go:484] object-"openstack"/"dns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112719 4758 reflector.go:484] object-"openshift-image-registry"/"installation-pull-secrets": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request 
from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112757 4758 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112780 4758 reflector.go:484] object-"openshift-machine-config-operator"/"node-bootstrapper-token": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112803 4758 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112827 4758 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112849 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-cliconfig": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112879 4758 reflector.go:484] object-"openshift-multus"/"multus-admission-controller-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112902 4758 reflector.go:484] object-"openstack"/"cert-nova-metadata-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112925 4758 reflector.go:484] object-"openstack"/"swift-proxy-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112948 4758 reflector.go:484] object-"metallb-system"/"frr-k8s-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112973 4758 reflector.go:484] object-"openshift-console"/"console-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112997 4758 reflector.go:484] object-"openstack-operators"/"openstack-operator-index-dockercfg-ck689": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: 
http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113021 4758 reflector.go:484] object-"openshift-console-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113046 4758 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113068 4758 reflector.go:484] object-"openstack-operators"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113091 4758 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113116 4758 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113138 4758 reflector.go:484] object-"openstack"/"cert-neutron-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113162 4758 reflector.go:484] object-"openshift-console-operator"/"console-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113175 4758 reflector.go:484] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113199 4758 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113210 4758 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113233 4758 reflector.go:484] 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113258 4758 reflector.go:484] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gmg82": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113280 4758 reflector.go:484] object-"metallb-system"/"frr-startup": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113288 4758 reflector.go:484] object-"openshift-operators"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113299 4758 reflector.go:484] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113316 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113343 4758 reflector.go:484] object-"openstack"/"barbican-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113358 4758 reflector.go:484] object-"openstack"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113387 4758 reflector.go:484] object-"openstack"/"keystone": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113410 4758 reflector.go:484] object-"openshift-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113426 4758 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113437 4758 reflector.go:484] 
object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9zqsl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113453 4758 reflector.go:484] object-"openshift-console-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113475 4758 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113491 4758 reflector.go:484] object-"openshift-dns-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113508 4758 reflector.go:484] object-"openshift-ingress-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113525 4758 reflector.go:484] object-"openstack"/"placement-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113534 4758 reflector.go:484] object-"openstack"/"ovsdbserver-sb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113561 4758 reflector.go:484] object-"openshift-marketplace"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.112163 4758 reflector.go:484] object-"openstack"/"prometheus-metric-storage-rulefiles-2": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113633 4758 reflector.go:484] object-"openshift-ingress"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113770 4758 reflector.go:484] object-"openshift-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113813 4758 reflector.go:484] 
object-"openstack"/"cert-metric-storage-prometheus-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113880 4758 reflector.go:484] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113911 4758 reflector.go:484] object-"metallb-system"/"metallb-excludel2": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113940 4758 reflector.go:484] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113973 4758 reflector.go:484] object-"openstack"/"barbican-keystone-listener-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113991 4758 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114013 4758 reflector.go:484] object-"openshift-etcd-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114020 4758 reflector.go:484] object-"openstack"/"nova-cell1-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114022 4758 reflector.go:484] object-"openshift-route-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114059 4758 reflector.go:484] object-"openshift-console"/"default-dockercfg-chnjx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114110 4758 reflector.go:484] object-"metallb-system"/"controller-dockercfg-qdnhd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 
18:00:08.114124 4758 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114131 4758 reflector.go:484] object-"openstack"/"cert-rabbitmq-notifications-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114147 4758 reflector.go:484] object-"openshift-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114155 4758 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114166 4758 reflector.go:484] object-"openshift-service-ca"/"signing-key": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114219 4758 reflector.go:484] object-"openshift-console"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114243 4758 reflector.go:484] object-"openstack"/"cert-cinder-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114258 4758 reflector.go:484] object-"openshift-operators"/"observability-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114267 4758 reflector.go:484] object-"metallb-system"/"manager-account-dockercfg-q7gzx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114279 4758 reflector.go:484] object-"openstack"/"prometheus-metric-storage-rulefiles-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114298 4758 reflector.go:484] object-"openstack"/"ovncontroller-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 
18:00:08.114323 4758 reflector.go:484] object-"openshift-machine-api"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114345 4758 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114365 4758 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114388 4758 reflector.go:484] object-"openstack"/"watcher-watcher-dockercfg-bvchw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114406 4758 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114414 4758 reflector.go:484] object-"openstack"/"cert-ovndbcluster-nb-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114425 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114448 4758 reflector.go:484] object-"metallb-system"/"speaker-dockercfg-9jfxj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114458 4758 reflector.go:484] object-"openstack"/"ceilometer-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114448 4758 reflector.go:484] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-s6bn2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114501 4758 reflector.go:484] object-"metallb-system"/"metallb-memberlist": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has 
prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114517 4758 reflector.go:484] object-"openshift-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114551 4758 reflector.go:484] object-"openstack"/"horizon-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114560 4758 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114564 4758 reflector.go:484] object-"openstack"/"placement-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114576 4758 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"pprof-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114590 4758 reflector.go:484] object-"openstack"/"ovndbcluster-sb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114224 4758 reflector.go:484] object-"openshift-ingress"/"router-stats-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114247 4758 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114620 4758 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114634 4758 reflector.go:484] object-"openshift-network-diagnostics"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114634 4758 reflector.go:484] object-"openstack"/"nova-cell0-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from 
succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114303 4758 reflector.go:484] object-"openstack"/"nova-cell1-novncproxy-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114663 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114673 4758 reflector.go:484] object-"openstack"/"barbican-worker-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114685 4758 reflector.go:484] object-"openstack"/"cinder-volume-nfs-2-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114694 4758 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114709 4758 reflector.go:484] object-"openshift-machine-api"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114712 4758 reflector.go:484] object-"openstack"/"openstack-cell1-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114720 4758 reflector.go:484] object-"openstack"/"keystone-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114731 4758 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114758 4758 reflector.go:484] object-"openstack"/"glance-glance-dockercfg-th7td": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114763 4758 reflector.go:484] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2zlds": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the 
request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114770 4758 reflector.go:484] object-"openstack"/"cert-keystone-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113541 4758 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114782 4758 reflector.go:484] object-"openstack"/"metric-storage-prometheus-dockercfg-4ftsd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114791 4758 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114426 4758 reflector.go:484] object-"openstack"/"cert-horizon-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114865 4758 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114878 4758 reflector.go:484] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8x67n": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114894 4758 reflector.go:484] object-"openstack"/"neutron-httpd-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114902 4758 reflector.go:484] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zfvmv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114931 4758 reflector.go:484] object-"openshift-console-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114936 4758 reflector.go:484] object-"openstack"/"neutron-neutron-dockercfg-zvr2k": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the 
watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114945 4758 reflector.go:484] object-"openshift-controller-manager"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114324 4758 reflector.go:484] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-brw4q": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114957 4758 reflector.go:484] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114968 4758 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114975 4758 reflector.go:484] object-"openstack"/"cert-barbican-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114988 4758 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114506 4758 reflector.go:484] object-"openshift-operators"/"observability-operator-sa-dockercfg-rdwz2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115017 4758 reflector.go:484] object-"openshift-operators"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115029 4758 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115043 4758 reflector.go:484] object-"openstack"/"cert-nova-novncproxy-cell1-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115051 4758 reflector.go:484] 
object-"openshift-image-registry"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115060 4758 reflector.go:484] object-"openstack"/"cert-neutron-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114794 4758 reflector.go:484] object-"openstack"/"tempest-tests-tempest-custom-data-s0": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114150 4758 reflector.go:484] object-"openstack"/"openstack-cell1-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115089 4758 reflector.go:484] object-"openstack"/"watcher-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114989 4758 reflector.go:484] object-"openshift-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115114 4758 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114805 4758 reflector.go:484] object-"openshift-dns"/"dns-dockercfg-jwfmh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115044 4758 reflector.go:484] object-"openstack"/"kube-state-metrics-tls-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115141 4758 reflector.go:484] object-"openshift-authentication"/"audit": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114694 4758 reflector.go:484] object-"openstack"/"ovndbcluster-nb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115162 4758 reflector.go:484] 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115168 4758 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115190 4758 reflector.go:484] object-"cert-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115199 4758 reflector.go:484] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nwvvt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115215 4758 reflector.go:484] object-"openstack"/"dnsmasq-dns-dockercfg-w2txv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115225 4758 reflector.go:484] object-"openshift-console"/"console-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115234 4758 reflector.go:484] object-"openstack"/"telemetry-ceilometer-dockercfg-kvpw9": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115246 4758 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.113992 4758 reflector.go:484] object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115267 4758 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115276 4758 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has 
prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115283 4758 reflector.go:484] object-"openshift-ingress-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115295 4758 reflector.go:484] object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4jql8": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115143 4758 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115307 4758 reflector.go:484] object-"openshift-oauth-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115315 4758 reflector.go:484] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115326 4758 reflector.go:484] object-"openstack"/"cert-watcher-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115335 4758 reflector.go:484] object-"openshift-dns-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115073 4758 reflector.go:484] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-g7xdx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115276 4758 reflector.go:484] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nzrzh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115376 4758 reflector.go:484] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115384 4758 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an 
error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115390 4758 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115406 4758 reflector.go:484] object-"openshift-authentication-operator"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115411 4758 reflector.go:484] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lk2r2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115419 4758 reflector.go:484] object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qcqlv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115041 4758 reflector.go:484] object-"openshift-nmstate"/"default-dockercfg-ckpvf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115438 4758 reflector.go:484] object-"openshift-nmstate"/"plugin-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115095 4758 reflector.go:484] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115450 4758 reflector.go:484] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115400 4758 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115469 4758 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115478 4758 
reflector.go:484] object-"openstack"/"swift-conf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115027 4758 reflector.go:484] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115494 4758 reflector.go:484] object-"openstack"/"horizon-horizon-dockercfg-n2vxv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115504 4758 reflector.go:484] object-"openstack"/"combined-ca-bundle": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115512 4758 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115524 4758 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:08.113958 4758 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": http2: client connection lost" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115494 4758 reflector.go:484] object-"openshift-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114372 4758 reflector.go:484] object-"openshift-machine-config-operator"/"mcc-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114634 4758 reflector.go:484] object-"metallb-system"/"controller-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114413 4758 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from 
succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115471 4758 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115019 4758 reflector.go:484] object-"openstack"/"cinder-backup-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115337 4758 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114223 4758 reflector.go:484] object-"openshift-marketplace"/"marketplace-trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115250 4758 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115524 4758 reflector.go:484] object-"openshift-console"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115192 4758 reflector.go:484] object-"openstack"/"memcached-memcached-dockercfg-2w6nn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115228 4758 reflector.go:484] object-"openshift-console"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115389 4758 reflector.go:484] object-"openshift-console-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115456 4758 reflector.go:484] object-"openstack"/"nova-nova-dockercfg-r6mc9": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115510 4758 reflector.go:484] object-"openstack"/"cinder-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch 
stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115506 4758 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115058 4758 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114935 4758 reflector.go:484] object-"openstack"/"cinder-cinder-dockercfg-85hcg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115412 4758 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115288 4758 reflector.go:484] object-"metallb-system"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115249 4758 reflector.go:484] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115106 4758 reflector.go:484] object-"openshift-authentication"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115105 4758 reflector.go:484] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115115 4758 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115119 4758 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115130 4758 reflector.go:484] object-"openstack"/"galera-openstack-dockercfg-g2jsf": watch of *v1.Secret ended with: an error on the 
server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115147 4758 reflector.go:484] object-"openshift-multus"/"metrics-daemon-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115148 4758 reflector.go:484] object-"openstack"/"cert-ovncontroller-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115155 4758 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115169 4758 reflector.go:484] object-"openshift-operators"/"perses-operator-dockercfg-c658k": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114831 4758 reflector.go:484] object-"metallb-system"/"speaker-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115004 4758 reflector.go:484] object-"openshift-oauth-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115023 4758 reflector.go:484] object-"openshift-ingress-canary"/"canary-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115045 4758 reflector.go:484] object-"openstack"/"cert-neutron-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115181 4758 reflector.go:484] object-"openstack"/"rabbitmq-notifications-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115196 4758 reflector.go:484] object-"openstack"/"default-dockercfg-d4w66": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115267 4758 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to 
decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114800 4758 reflector.go:484] object-"openshift-apiserver"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115288 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115289 4758 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115300 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115307 4758 reflector.go:484] object-"openshift-machine-api"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115317 4758 reflector.go:484] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115318 4758 reflector.go:484] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115337 4758 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-router-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115339 4758 reflector.go:484] object-"openstack"/"prometheus-metric-storage-tls-assets-0": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115357 4758 reflector.go:484] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115410 4758 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": 
watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115417 4758 reflector.go:484] object-"openstack"/"nova-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115433 4758 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115447 4758 reflector.go:484] object-"openstack"/"swift-storage-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115463 4758 reflector.go:484] object-"openshift-authentication-operator"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115476 4758 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115488 4758 reflector.go:484] object-"openstack"/"glance-default-internal-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115494 4758 reflector.go:484] object-"openshift-oauth-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115509 4758 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115525 4758 reflector.go:484] object-"openstack"/"cert-cinder-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115527 4758 reflector.go:484] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115528 4758 reflector.go:484] 
object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s6gv4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115546 4758 reflector.go:484] object-"openstack"/"barbican-barbican-dockercfg-z4pqk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115548 4758 reflector.go:484] object-"openstack"/"prometheus-metric-storage-rulefiles-0": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115551 4758 reflector.go:484] object-"openshift-console"/"console-dockercfg-f62pw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115553 4758 reflector.go:484] object-"openshift-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115567 4758 reflector.go:484] object-"openstack"/"cinder-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115574 4758 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tzrkw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115579 4758 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115587 4758 reflector.go:484] object-"openshift-config-operator"/"config-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115593 4758 reflector.go:484] object-"openstack"/"openstackclient-openstackclient-dockercfg-kmlnc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115597 4758 reflector.go:484] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc 
kubenswrapper[4758]: W0122 18:00:08.115601 4758 reflector.go:484] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2fs5z": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115600 4758 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115612 4758 reflector.go:484] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115613 4758 reflector.go:484] object-"openstack"/"cert-glance-default-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114998 4758 reflector.go:484] object-"openstack"/"prometheus-metric-storage-web-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115630 4758 reflector.go:484] object-"openstack"/"ovnnorthd-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115624 4758 reflector.go:484] object-"openstack"/"cinder-volume-nfs-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115637 4758 reflector.go:484] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115637 4758 reflector.go:484] object-"openstack"/"barbican-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115641 4758 reflector.go:484] object-"openstack"/"rabbitmq-cell1-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115654 4758 reflector.go:484] object-"openstack"/"rabbitmq-cell1-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 
18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115667 4758 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115664 4758 reflector.go:484] object-"openstack"/"rabbitmq-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115664 4758 reflector.go:484] object-"openstack"/"cert-keystone-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.115679 4758 reflector.go:484] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:08.114838 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:08.140583 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:08.269276 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:08.269341 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:08.397500 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8081/readyz\": dial tcp 10.217.0.221:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:08.808111 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:08.808441 4758 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:09.217900 4758 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-thgv5 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:09.217965 4758 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-thgv5 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:09.218030 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" podUID="e12dec2b-da40-4cad-92b5-91ab59c0e7c2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:09.217971 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" podUID="e12dec2b-da40-4cad-92b5-91ab59c0e7c2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:10.163836 4758 request.go:700] Waited for 1.000525473s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:11.164269 4758 request.go:700] Waited for 1.899019074s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84524 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:12.140043 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" podUID="cdd1962b-fbf0-480c-b5e2-e28ee6988046" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:12.183695 4758 request.go:700] Waited for 2.835167391s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=84252 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:13.184320 4758 request.go:700] Waited for 3.74017992s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-s6gv4&resourceVersion=84343 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:13.836428 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:13.836510 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.124112 4758 reflector.go:561] object-"openstack"/"cert-keystone-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-public-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.124236 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-keystone-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-public-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.143810 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.144102 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:14.159456 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": read tcp 192.168.126.11:43266->192.168.126.11:6443: read: connection reset by peer" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:14.159500 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": read tcp 192.168.126.11:43266->192.168.126.11:6443: read: connection reset by peer" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 
18:00:14.159895 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:14.159956 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.164140 4758 reflector.go:561] object-"openshift-image-registry"/"installation-pull-secrets": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.164250 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"installation-pull-secrets\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.184818 4758 reflector.go:561] object-"openstack"/"metric-storage-prometheus-dockercfg-4ftsd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4ftsd&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.184887 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"metric-storage-prometheus-dockercfg-4ftsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4ftsd&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:14.204425 4758 request.go:700] Waited for 4.66978515s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=84343 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.206010 4758 reflector.go:561] object-"openstack"/"cinder-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.206097 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection 
refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.226833 4758 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.226968 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.244777 4758 reflector.go:561] object-"openstack-operators"/"webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.244892 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.264020 4758 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.264119 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.283937 4758 reflector.go:561] object-"openstack"/"glance-glance-dockercfg-th7td": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-th7td&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.283990 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-glance-dockercfg-th7td\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-th7td&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.304636 4758 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.304725 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.324521 4758 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.324603 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.344465 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.344541 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-vw8fw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.364357 4758 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.364473 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc 
kubenswrapper[4758]: W0122 18:00:14.384172 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.384365 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.404128 4758 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=84645": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.404221 4758 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=84645\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.423796 4758 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.423929 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.444104 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.444219 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.463727 4758 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": 
failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.463822 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-qt55r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.483651 4758 reflector.go:561] object-"openstack"/"dnsmasq-dns-dockercfg-w2txv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-w2txv&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.483734 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dnsmasq-dns-dockercfg-w2txv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-w2txv&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.503693 4758 reflector.go:561] object-"openstack"/"watcher-applier-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-applier-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.503781 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-applier-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-applier-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.523763 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.523806 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2bh8d\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=84384\": dial tcp 38.102.83.223:6443: 
connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.544120 4758 reflector.go:561] object-"metallb-system"/"controller-dockercfg-qdnhd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-qdnhd&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.544209 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-qdnhd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-qdnhd&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.563953 4758 reflector.go:561] object-"openstack"/"nova-cell1-novncproxy-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.564060 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-novncproxy-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.584310 4758 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4q6rk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-4q6rk&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.584417 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-manager-dockercfg-4q6rk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-4q6rk&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.604192 4758 reflector.go:561] object-"openstack"/"cert-nova-metadata-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-metadata-internal-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.604262 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-metadata-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-metadata-internal-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: 
connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.623677 4758 reflector.go:561] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-g7xdx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-g7xdx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.623750 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"octavia-operator-controller-manager-dockercfg-g7xdx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-g7xdx&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.644171 4758 reflector.go:561] object-"openstack"/"swift-swift-dockercfg-xgjlh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-swift-dockercfg-xgjlh&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.644265 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-swift-dockercfg-xgjlh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-swift-dockercfg-xgjlh&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.664737 4758 reflector.go:561] object-"openstack"/"tempest-tests-tempest-custom-data-s0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dtempest-tests-tempest-custom-data-s0&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.664814 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"tempest-tests-tempest-custom-data-s0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dtempest-tests-tempest-custom-data-s0&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.684579 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.684619 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.704806 4758 reflector.go:561] object-"cert-manager"/"cert-manager-dockercfg-qcl9m": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-qcl9m&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.704860 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-dockercfg-qcl9m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-qcl9m&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.723841 4758 reflector.go:561] object-"openstack"/"ovndbcluster-nb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.723894 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.743911 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.743993 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.763844 4758 reflector.go:561] object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.763902 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.784146 4758 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.784239 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.804168 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.804242 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-r9srn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.824275 4758 reflector.go:561] object-"openstack"/"cert-watcher-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-public-svc&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.824359 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-watcher-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-public-svc&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.844201 4758 reflector.go:561] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dbtnp": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-dbtnp&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.844254 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"telemetry-operator-controller-manager-dockercfg-dbtnp\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-dbtnp&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.863909 4758 reflector.go:561] object-"metallb-system"/"frr-k8s-daemon-dockercfg-s75rc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-s75rc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.863946 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-daemon-dockercfg-s75rc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-s75rc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.884342 4758 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.884405 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.904860 4758 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.904933 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.924421 4758 reflector.go:561] object-"openstack"/"swift-ring-files": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-ring-files&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.924491 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-ring-files\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-ring-files&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:14.937982 4758 patch_prober.go:28] interesting pod/controller-manager-5b46f89db7-56qr2 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:14.938023 4758 patch_prober.go:28] interesting pod/controller-manager-5b46f89db7-56qr2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:14.938033 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" podUID="11e5039c-273e-4208-9295-329a27e6d22b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:14.938078 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" podUID="11e5039c-273e-4208-9295-329a27e6d22b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.944208 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.944266 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.964598 4758 reflector.go:561] object-"openstack"/"cinder-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.964665 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:14.984057 4758 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:14.984141 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.003846 4758 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.003937 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.024559 4758 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.024632 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.044454 4758 reflector.go:561] object-"openstack"/"cert-cinder-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-public-svc&resourceVersion=84567": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.044512 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-cinder-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-public-svc&resourceVersion=84567\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.064168 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.064211 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.084034 4758 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4jql8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-4jql8&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.084124 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-4jql8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-4jql8&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.107355 4758 reflector.go:561] object-"openstack"/"cert-nova-novncproxy-cell1-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-public-svc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.107434 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-novncproxy-cell1-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-public-svc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.122144 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35704->38.102.83.223:6443: read: connection reset by peer" interval="200ms" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.124101 4758 reflector.go:561] object-"openstack"/"keystone-scripts": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.124174 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.144306 4758 reflector.go:561] object-"openstack"/"nova-cell1-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.144426 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.164012 4758 reflector.go:561] object-"openshift-image-registry"/"image-registry-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.164082 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.184146 4758 reflector.go:561] object-"openshift-ingress"/"router-dockercfg-zdk86": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.184225 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-dockercfg-zdk86\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.204469 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial 
tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.204545 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.223713 4758 request.go:700] Waited for 5.606735179s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-nwvvt&resourceVersion=84295 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.224100 4758 reflector.go:561] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nwvvt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-nwvvt&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.224175 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"test-operator-controller-manager-dockercfg-nwvvt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-nwvvt&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.243908 4758 reflector.go:561] object-"openstack"/"rabbitmq-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.243978 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.264292 4758 reflector.go:561] object-"openshift-ingress-canary"/"canary-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.264358 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"canary-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 
22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.284020 4758 reflector.go:561] object-"openstack"/"openstack-cell1-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.284075 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.299764 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.299808 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.299861 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.300627 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"087a29a92b87397845777f3d37268935361fbcdc0080c0ed7d757240b78974bb"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed liveness probe, will be restarted" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.300720 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://087a29a92b87397845777f3d37268935361fbcdc0080c0ed7d757240b78974bb" gracePeriod=30 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.304873 4758 reflector.go:561] object-"openstack"/"combined-ca-bundle": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.304929 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"combined-ca-bundle\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc 
kubenswrapper[4758]: E0122 18:00:15.323523 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="400ms" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.323875 4758 reflector.go:561] object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-vencrypt&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.323961 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-novncproxy-cell1-vencrypt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-vencrypt&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.344882 4758 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.344951 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.363841 4758 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.363896 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.384436 4758 reflector.go:561] object-"openstack-operators"/"metrics-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.384500 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"metrics-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.413469 4758 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.413565 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.413848 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.424375 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.424462 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.444572 4758 reflector.go:561] object-"openstack"/"cert-rabbitmq-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.444656 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-rabbitmq-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.464451 4758 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.464595 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.484538 4758 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.484637 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.504642 4758 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.504729 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.523939 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.524034 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.543962 4758 reflector.go:561] object-"openstack"/"cert-ovndbcluster-sb-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-sb-ovndbs&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.544055 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovndbcluster-sb-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-sb-ovndbs&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.564412 4758 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.564503 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.578900 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" podUID="901f347a-3b10-4392-8247-41a859112544" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.579005 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.583893 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.583967 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.605443 4758 reflector.go:561] object-"openstack"/"cert-swift-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-swift-public-svc&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.605556 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-swift-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-swift-public-svc&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" 
logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.624673 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.624805 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.644684 4758 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.644764 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.655159 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" podUID="e7fdd2cd-e517-46b5-acb3-22b59b7f132f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.663878 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.663960 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.684074 4758 reflector.go:561] object-"openstack"/"cert-horizon-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-horizon-svc&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.684153 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-horizon-svc\": Failed to 
watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-horizon-svc&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.704149 4758 reflector.go:561] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.704260 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-d427c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.724334 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="800ms" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.724341 4758 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-brw4q": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-brw4q&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.724464 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-brw4q\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-brw4q&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:15.739943 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" podUID="25848d11-6830-45f8-aff0-0082594b5f3f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.744133 4758 reflector.go:561] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2fs5z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-2fs5z&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.744188 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ovn-operator-controller-manager-dockercfg-2fs5z\": Failed to watch *v1.Secret: failed to list 
*v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-2fs5z&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.763852 4758 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.763900 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.783835 4758 reflector.go:561] object-"openstack"/"cert-rabbitmq-notifications-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-notifications-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.783922 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-rabbitmq-notifications-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-notifications-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.804403 4758 reflector.go:561] object-"openshift-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.804448 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.824272 4758 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.824341 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.843957 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.844019 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.864490 4758 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.864588 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.884706 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-default-user&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.884842 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-default-user&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.904735 4758 reflector.go:561] object-"openshift-ingress"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.904854 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.924792 4758 reflector.go:561] object-"openstack"/"swift-proxy-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-proxy-config-data&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.924863 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-proxy-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-proxy-config-data&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.943880 4758 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.943961 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.964004 4758 reflector.go:561] object-"openstack"/"cert-rabbitmq-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-cell1-svc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.964078 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-rabbitmq-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-cell1-svc&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:15.984117 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:15.984192 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.004013 4758 reflector.go:561] object-"openstack"/"barbican-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.004068 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.022961 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.022993 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.024708 4758 reflector.go:561] object-"openstack"/"telemetry-ceilometer-dockercfg-kvpw9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-kvpw9&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.024838 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-ceilometer-dockercfg-kvpw9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-kvpw9&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.040489 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.040605 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.041892 4758 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.044343 4758 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.044408 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.063869 4758 reflector.go:561] object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.063929 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.082826 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="743945d0-7488-4665-beaf-f2026e10a424" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.9:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.083021 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="743945d0-7488-4665-beaf-f2026e10a424" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.9:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.084566 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.084623 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.104571 4758 reflector.go:561] object-"openstack"/"keystone-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.104703 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.123864 4758 reflector.go:561] object-"openshift-dns-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84691": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.124025 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84691\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.143897 4758 reflector.go:561] object-"openstack"/"cinder-volume-nfs-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-config-data&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.143971 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-volume-nfs-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-config-data&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.164680 4758 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.164772 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.184558 4758 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.184621 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.204191 4758 reflector.go:561] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.204263 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"service-ca-dockercfg-pn86c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.224150 4758 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.224280 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.243814 4758 request.go:700] Waited for 6.539338419s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84313 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.244463 4758 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.244597 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.264930 4758 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.265037 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.284727 4758 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.284870 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.304489 4758 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.304608 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.324499 4758 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35720->38.102.83.223:6443: read: connection reset by peer" pod="openstack/ovsdbserver-nb-0" volumeName="ovndbcluster-nb-etc-ovn" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.345172 4758 status_manager.go:851] "Failed to get status for pod" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-4jthc\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35736->38.102.83.223:6443: read: connection reset by peer" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.365149 4758 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35896->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.365301 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35896->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.384438 4758 reflector.go:561] object-"openstack"/"watcher-decision-engine-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-decision-engine-config-data&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35868->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.384560 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-decision-engine-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-decision-engine-config-data&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35868->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.404435 4758 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read 
tcp 38.102.83.223:35886->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.404585 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35886->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.424227 4758 reflector.go:561] object-"openstack"/"cert-galera-openstack-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35904->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.424371 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35904->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.444149 4758 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35922->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.444267 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35922->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.484241 4758 reflector.go:561] object-"openstack"/"ovsdbserver-sb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35916->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.484314 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-sb\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35916->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.484283 4758 reflector.go:561] object-"openstack"/"horizon-horizon-dockercfg-n2vxv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-n2vxv&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35962->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.484395 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-horizon-dockercfg-n2vxv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-n2vxv&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35962->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.504643 4758 reflector.go:561] object-"openshift-operators"/"perses-operator-dockercfg-c658k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-c658k&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35982->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.504830 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"perses-operator-dockercfg-c658k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-c658k&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35982->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.524673 4758 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35948->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.525040 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: 
connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35948->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.525457 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="1.6s" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.544381 4758 reflector.go:561] object-"openstack"/"ovnnorthd-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35932->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.544461 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35932->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.564766 4758 reflector.go:561] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zfvmv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-zfvmv&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35940->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.564846 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ironic-operator-controller-manager-dockercfg-zfvmv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-zfvmv&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35940->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.575921 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.575921 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" podUID="c73a71b4-f1fd-4a6c-9832-ce9b48a5f220" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.584813 4758 reflector.go:561] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8x67n": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-8x67n&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36084->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.584892 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"heat-operator-controller-manager-dockercfg-8x67n\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-8x67n&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36084->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.604362 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36122->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.604431 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36122->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.624106 4758 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36054->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.624187 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36054->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.627916 
4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" podUID="901f347a-3b10-4392-8247-41a859112544" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.644586 4758 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36114->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.644647 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36114->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.664492 4758 reflector.go:561] object-"openstack"/"placement-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36048->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.664570 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36048->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.685548 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36106->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.685615 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36106->38.102.83.223:6443: 
read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.704782 4758 reflector.go:561] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36006->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.704850 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"registry-dockercfg-kzzsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36006->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.724335 4758 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36062->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.724405 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36062->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.744520 4758 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36102->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.744590 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-gkqpw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36102->38.102.83.223:6443: read: connection 
reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.763925 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36030->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.764004 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36030->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.784078 4758 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36012->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.784151 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36012->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.804330 4758 reflector.go:561] object-"openstack"/"cinder-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36020->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.804416 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36020->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.824053 4758 reflector.go:561] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2zlds": 
failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-2zlds&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35992->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.824158 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-ovnnorthd-dockercfg-2zlds\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-2zlds&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:35992->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.844609 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36138->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.844706 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36138->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.864752 4758 reflector.go:561] object-"openstack-operators"/"openstack-operator-index-dockercfg-ck689": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-ck689&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36042->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.864835 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-index-dockercfg-ck689\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-ck689&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36042->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.884643 4758 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36080->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.884717 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36080->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.902159 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" podUID="d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.902257 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.902296 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" podUID="40845474-36a2-48c0-a0df-af5deb2a31fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.902545 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" podUID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.902574 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.902991 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"a86ae74b37544ab164be41ebf400131e9e7d915da894679621c4bbdc42ef92f9"} pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.903026 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" podUID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerName="frr-k8s-webhook-server" containerID="cri-o://a86ae74b37544ab164be41ebf400131e9e7d915da894679621c4bbdc42ef92f9" gracePeriod=10 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.903131 4758 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" podUID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:16.903224 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.903793 4758 reflector.go:561] object-"openstack"/"cert-nova-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-public-svc&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36146->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.903872 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-public-svc&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36146->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.923985 4758 reflector.go:561] object-"openshift-nmstate"/"nmstate-operator-dockercfg-2sf4f": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-2sf4f&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36092->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.924069 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-operator-dockercfg-2sf4f\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-2sf4f&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36092->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.944369 4758 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36194->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.944845 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36194->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.964302 4758 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36180->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.964373 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36180->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:16.984356 4758 reflector.go:561] object-"openstack"/"neutron-neutron-dockercfg-zvr2k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-zvr2k&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36166->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:16.984427 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-neutron-dockercfg-zvr2k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-zvr2k&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36166->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.004262 4758 reflector.go:561] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gmg82": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-gmg82&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36176->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.004349 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"nova-operator-controller-manager-dockercfg-gmg82\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-gmg82&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36176->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.025644 4758 reflector.go:561] object-"openstack"/"cert-cinder-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-internal-svc&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36314->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.025775 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-cinder-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-internal-svc&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36314->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.044391 4758 reflector.go:561] object-"openstack"/"memcached-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36202->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.044471 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36202->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.064718 4758 reflector.go:561] object-"openstack"/"cert-placement-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-public-svc&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36218->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.064831 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-placement-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-public-svc&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36218->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 
crc kubenswrapper[4758]: W0122 18:00:17.084413 4758 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36238->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.084481 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36238->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:17.095909 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" podUID="644142ed-c937-406d-9fd5-3fe856879a92" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.103892 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36252->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.103956 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36252->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.125346 4758 reflector.go:561] object-"openstack"/"neutron-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36290->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.125457 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:36290->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.144416 4758 reflector.go:561] object-"openshift-dns"/"dns-dockercfg-jwfmh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36270->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.144529 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-dockercfg-jwfmh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36270->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.164763 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36326->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.164854 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36326->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.184335 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36196->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.184462 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:36196->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.204661 4758 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8t2s8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-init-dockercfg-8t2s8&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36220->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.204805 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-init-dockercfg-8t2s8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-init-dockercfg-8t2s8&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36220->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.224011 4758 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36272->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.224123 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36272->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:17.244105 4758 request.go:700] Waited for 2.115767142s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": read tcp 38.102.83.223:36260->38.102.83.223:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.244679 4758 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36260->38.102.83.223:6443: read: connection reset by peer Jan 22 
18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.244810 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36260->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.264245 4758 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36306->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.264348 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36306->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.284390 4758 reflector.go:561] object-"openstack"/"cinder-volume-nfs-2-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-2-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36232->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.284506 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-volume-nfs-2-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-2-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36232->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.304189 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36222->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.304333 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36222->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.324177 4758 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36336->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.324309 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36336->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.344679 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36462->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.344805 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36462->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.364276 4758 reflector.go:561] object-"metallb-system"/"frr-startup": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36410->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.364371 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-startup\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36410->38.102.83.223:6443: read: connection 
reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.383847 4758 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36444->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.383920 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36444->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.404307 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36470->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.404367 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36470->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.424838 4758 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36440->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.424925 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:36440->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.444141 4758 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36452->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.444220 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36452->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.464556 4758 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36426->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.464636 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36426->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.484909 4758 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36354->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.484983 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36354->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 
18:00:17.503712 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36408->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.503802 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36408->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.524055 4758 reflector.go:561] object-"openstack-operators"/"infra-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36376->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.524121 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36376->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.544121 4758 reflector.go:561] object-"openstack"/"cert-metric-storage-prometheus-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-metric-storage-prometheus-svc&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36386->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.544209 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-metric-storage-prometheus-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-metric-storage-prometheus-svc&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36386->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.564100 4758 reflector.go:561] object-"openstack"/"cert-barbican-internal-svc": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-barbican-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36362->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.564173 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-barbican-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-barbican-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36362->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.584559 4758 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36340->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.584658 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36340->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.604185 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36372->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.604267 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36372->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.624417 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a 
previous attempt: read tcp 38.102.83.223:36402->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.624486 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36402->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.631145 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events/ceilometer-0.188d1f633c8ab212\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ceilometer-0.188d1f633c8ab212 openstack 84476 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:ceilometer-0,UID:93923998-0016-4db9-adff-a433c7a8d57c,APIVersion:v1,ResourceVersion:49775,FieldPath:spec.containers{ceilometer-notification-agent},},Reason:Unhealthy,Message:Liveness probe failed: command timed out,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 17:58:59 +0000 UTC,LastTimestamp:2026-01-22 17:59:29.70354374 +0000 UTC m=+5391.186883055,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.643959 4758 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36472->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.644043 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36472->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.664829 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36494->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.664976 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36494->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.684047 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36638->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.684158 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-rq7zk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36638->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.704458 4758 reflector.go:561] object-"openshift-authentication"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36506->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.704550 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36506->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.724078 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36526->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.724213 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36526->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.744633 4758 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36556->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.744697 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36556->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.764663 4758 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36594->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.764754 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36594->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.784722 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36612->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.784855 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: 
connection refused - error from a previous attempt: read tcp 38.102.83.223:36612->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.804946 4758 reflector.go:561] object-"openstack"/"cert-placement-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36624->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.805074 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-placement-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36624->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.823950 4758 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-z2sxt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-z2sxt&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36510->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.824049 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-z2sxt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-z2sxt&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36510->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:17.840021 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" podUID="35a3fafd-45ea-465d-90ef-36148a60685e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:17.840144 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" podUID="35a3fafd-45ea-465d-90ef-36148a60685e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.844068 4758 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36542->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.844164 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36542->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.864236 4758 reflector.go:561] object-"openstack"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36572->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.864320 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36572->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.884970 4758 reflector.go:561] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-s6bn2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-s6bn2&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36602->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.885082 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"watcher-operator-controller-manager-dockercfg-s6bn2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-s6bn2&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36602->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.904394 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=84143": 
dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36774->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.904548 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36774->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.927558 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36664->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.927734 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-thanos-prometheus-http-client-file\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36664->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:17.945017 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" podUID="d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.945048 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36696->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.945172 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36696->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.964303 4758 reflector.go:561] 
object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36722->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.964436 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36722->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:17.984880 4758 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36748->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:17.984981 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36748->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.004975 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36762->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.005086 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36762->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.024896 4758 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list 
*v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36654->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.025004 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36654->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.035470 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-lpprz" podUID="cc433179-ae5b-4250-80c2-97af371fdfed" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.035573 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-lpprz" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.036006 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-lpprz" podUID="cc433179-ae5b-4250-80c2-97af371fdfed" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.036170 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lpprz" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.036949 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"94d80fab259bbdba24e6cb6f6b906c1c7fc7544cc57f0cf0de9ee3c67a648b6c"} pod="metallb-system/speaker-lpprz" containerMessage="Container speaker failed liveness probe, will be restarted" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.037043 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-lpprz" podUID="cc433179-ae5b-4250-80c2-97af371fdfed" containerName="speaker" containerID="cri-o://94d80fab259bbdba24e6cb6f6b906c1c7fc7544cc57f0cf0de9ee3c67a648b6c" gracePeriod=2 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.046452 4758 reflector.go:561] object-"openstack"/"cert-memcached-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-memcached-svc&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36680->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.046548 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-memcached-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-memcached-svc&resourceVersion=84295\": dial tcp 
38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36680->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.064181 4758 reflector.go:561] object-"openstack"/"cert-nova-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36708->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.064260 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36708->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.083849 4758 reflector.go:561] object-"openstack"/"nova-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36738->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.083932 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36738->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.104515 4758 reflector.go:561] object-"openshift-nmstate"/"nginx-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36790->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.104630 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nginx-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36790->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.124450 4758 reflector.go:561] object-"openstack"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36796->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.124522 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36796->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.127037 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="3.2s" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.144473 4758 reflector.go:561] object-"openstack"/"cert-keystone-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-internal-svc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36804->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.144540 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-keystone-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-internal-svc&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36804->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.164798 4758 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36808->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.164882 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-rg9jl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36808->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.184232 4758 reflector.go:561] 
object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36816->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.184314 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36816->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.204440 4758 reflector.go:561] object-"openstack"/"barbican-keystone-listener-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36938->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.204554 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-keystone-listener-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36938->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.224468 4758 reflector.go:561] object-"openstack"/"keystone": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36830->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.224539 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36830->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.244231 4758 reflector.go:561] object-"openshift-dns"/"dns-default-metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - 
error from a previous attempt: read tcp 38.102.83.223:36852->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.244295 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default-metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36852->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.263617 4758 request.go:700] Waited for 3.128943999s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-q7gzx&resourceVersion=84680": read tcp 38.102.83.223:36880->38.102.83.223:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-q7gzx&resourceVersion=84680 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.263956 4758 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-q7gzx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-q7gzx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36880->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.264009 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-q7gzx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-q7gzx&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36880->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.269695 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.269730 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.270009 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.283991 4758 reflector.go:561] 
object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zpd54": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-zpd54&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36890->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.284089 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"horizon-operator-controller-manager-dockercfg-zpd54\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-zpd54&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36890->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.304549 4758 reflector.go:561] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nzrzh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-nzrzh&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36922->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.304674 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"swift-operator-controller-manager-dockercfg-nzrzh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-nzrzh&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36922->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.324619 4758 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36942->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.324688 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36942->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.343995 4758 reflector.go:561] 
object-"metallb-system"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36840->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.344062 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36840->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.364581 4758 reflector.go:561] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pxl5h": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-pxl5h&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36864->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.364644 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-ovncontroller-dockercfg-pxl5h\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-pxl5h&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36864->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.384528 4758 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36888->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.384607 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36888->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.400440 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" 
containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8081/readyz\": dial tcp 10.217.0.221:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.400600 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.403039 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T18:00:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T18:00:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T18:00:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T18:00:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.403609 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.403612 4758 reflector.go:561] object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36912->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.403683 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36912->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.404125 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.404400 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.404672 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.404686 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.423913 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36926->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.424008 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36926->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.444264 4758 reflector.go:561] object-"openstack"/"ovndbcluster-sb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36956->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.444327 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36956->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.464301 4758 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36978->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.464396 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36978->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.484659 4758 reflector.go:561] object-"hostpath-provisioner"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36958->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.484766 4758 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36958->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.504662 4758 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36984->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.504763 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36984->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.524851 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36972->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.524952 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:36972->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.544702 4758 reflector.go:561] object-"openstack"/"ovnnorthd-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36994->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.544803 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:36994->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.563761 4758 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37066->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.563830 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37066->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.584641 4758 reflector.go:561] object-"openstack"/"ovndbcluster-nb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37022->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.584785 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37022->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.604181 4758 reflector.go:561] object-"openstack"/"watcher-api-config-data": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-api-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37038->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.604242 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-api-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37038->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.624481 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-dockercfg-5sdkn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-5sdkn&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37006->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.624581 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-dockercfg-5sdkn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-5sdkn&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37006->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.644200 4758 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37118->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.644262 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37118->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.664576 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:37078->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.664669 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37078->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.684436 4758 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37148->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.684525 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37148->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.704992 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37108->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.705079 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37108->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.725671 4758 reflector.go:561] object-"openshift-console"/"console-dockercfg-f62pw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37214->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.725762 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-console\"/\"console-dockercfg-f62pw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37214->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.745824 4758 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37310->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.745893 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37310->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.764609 4758 reflector.go:561] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lk2r2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-lk2r2&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37252->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.764689 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"keystone-operator-controller-manager-dockercfg-lk2r2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-lk2r2&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37252->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.784967 4758 reflector.go:561] object-"openshift-operators"/"observability-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37270->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.785104 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-tls\": Failed to watch *v1.Secret: 
failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37270->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.804519 4758 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37236->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.804630 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37236->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.824751 4758 reflector.go:561] object-"openstack"/"dns": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37396->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.824822 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37396->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.832597 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:18.832641 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.844614 4758 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a 
previous attempt: read tcp 38.102.83.223:37446->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.844715 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37446->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.864858 4758 reflector.go:561] object-"openstack"/"cinder-cinder-dockercfg-85hcg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-85hcg&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37486->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.864957 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-cinder-dockercfg-85hcg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-85hcg&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37486->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.884700 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-cliconfig": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37568->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.884779 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37568->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.904153 4758 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37530->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.904196 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37530->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.923676 4758 reflector.go:561] object-"openstack"/"cert-neutron-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-internal-svc&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37622->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.923709 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-neutron-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-internal-svc&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37622->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.944139 4758 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-x59mw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-x59mw&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37714->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.944188 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-sb-dockercfg-x59mw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-x59mw&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37714->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.964320 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-web-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37642->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.964367 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-web-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37642->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:18.983883 4758 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37740->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:18.983920 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37740->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.004190 4758 reflector.go:561] object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qcqlv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-qcqlv&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37660->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.004306 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"placement-operator-controller-manager-dockercfg-qcqlv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-qcqlv&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37660->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.024549 4758 reflector.go:561] object-"openstack"/"horizon": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37818->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.024628 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.223:37818->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.044547 4758 reflector.go:561] object-"openstack"/"barbican-barbican-dockercfg-z4pqk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-z4pqk&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37744->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.044597 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-barbican-dockercfg-z4pqk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-z4pqk&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37744->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.063888 4758 reflector.go:561] object-"openstack"/"openstackclient-openstackclient-dockercfg-kmlnc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-kmlnc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37788->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.063973 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstackclient-openstackclient-dockercfg-kmlnc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-kmlnc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37788->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.084032 4758 reflector.go:561] object-"openstack"/"ceilometer-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37804->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.084113 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37804->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.104190 4758 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to 
list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37786->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.104226 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37786->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.124064 4758 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37856->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.124113 4758 trace.go:236] Trace[1775989214]: "Reflector ListAndWatch" name:object-"openshift-dns"/"dns-default" (22-Jan-2026 18:00:09.121) (total time: 10003ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1775989214]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37856->38.102.83.223:6443: read: connection reset by peer 10002ms (18:00:19.124) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1775989214]: [10.003006672s] [10.003006672s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.124125 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37856->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.144179 4758 reflector.go:561] object-"openstack"/"cert-galera-openstack-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37912->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.144226 4758 trace.go:236] Trace[967792687]: "Reflector ListAndWatch" name:object-"openstack"/"cert-galera-openstack-svc" (22-Jan-2026 18:00:09.126) (total time: 10017ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[967792687]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37912->38.102.83.223:6443: read: connection reset by peer 10017ms (18:00:19.144) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[967792687]: [10.0179598s] [10.0179598s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.144268 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37912->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.164198 4758 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37936->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.164294 4758 trace.go:236] Trace[2031372184]: "Reflector ListAndWatch" name:object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" (22-Jan-2026 18:00:09.127) (total time: 10036ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2031372184]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37936->38.102.83.223:6443: read: connection reset by peer 10036ms (18:00:19.164) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2031372184]: [10.036706221s] [10.036706221s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.164321 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37936->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.184904 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-2": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-2&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37894->38.102.83.223:6443: read: connection reset by peer Jan 22 
18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.184997 4758 trace.go:236] Trace[2023273357]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage-rulefiles-2" (22-Jan-2026 18:00:09.125) (total time: 10059ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2023273357]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-2&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37894->38.102.83.223:6443: read: connection reset by peer 10059ms (18:00:19.184) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2023273357]: [10.059261216s] [10.059261216s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.185026 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-2&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37894->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.204164 4758 reflector.go:561] object-"cert-manager"/"cert-manager-webhook-dockercfg-9xxdc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-9xxdc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37918->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.204305 4758 trace.go:236] Trace[1545693490]: "Reflector ListAndWatch" name:object-"cert-manager"/"cert-manager-webhook-dockercfg-9xxdc" (22-Jan-2026 18:00:09.127) (total time: 10076ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1545693490]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-9xxdc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37918->38.102.83.223:6443: read: connection reset by peer 10076ms (18:00:19.204) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1545693490]: [10.076799123s] [10.076799123s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.204340 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-9xxdc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-9xxdc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37918->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.215846 4758 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-thgv5 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.215887 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" podUID="e12dec2b-da40-4cad-92b5-91ab59c0e7c2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.215930 4758 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-thgv5 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.215943 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-thgv5" podUID="e12dec2b-da40-4cad-92b5-91ab59c0e7c2" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.224206 4758 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37956->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.224282 4758 trace.go:236] Trace[576735931]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.131) (total time: 10092ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[576735931]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37956->38.102.83.223:6443: read: connection reset by peer 10092ms (18:00:19.224) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[576735931]: [10.092453081s] [10.092453081s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.224297 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37956->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.244355 4758 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.223:38024->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.244448 4758 trace.go:236] Trace[1562703624]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.134) (total time: 10110ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1562703624]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38024->38.102.83.223:6443: read: connection reset by peer 10110ms (18:00:19.244) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1562703624]: [10.110217404s] [10.110217404s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.244468 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38024->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.264878 4758 request.go:700] Waited for 4.118160233s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84276": read tcp 38.102.83.223:38000->38.102.83.223:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84276 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.265521 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38000->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.265642 4758 trace.go:236] Trace[2057791141]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"env-overrides" (22-Jan-2026 18:00:09.134) (total time: 10131ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2057791141]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38000->38.102.83.223:6443: read: connection reset by peer 10131ms (18:00:19.265) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2057791141]: [10.131455014s] [10.131455014s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.265682 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38000->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.284090 4758 reflector.go:561] object-"openshift-operators"/"observability-operator-sa-dockercfg-rdwz2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-rdwz2&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38070->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.284170 4758 trace.go:236] Trace[47420126]: "Reflector ListAndWatch" name:object-"openshift-operators"/"observability-operator-sa-dockercfg-rdwz2" (22-Jan-2026 18:00:09.136) (total time: 10147ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[47420126]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-rdwz2&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38070->38.102.83.223:6443: read: connection reset by peer 10147ms (18:00:19.284) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[47420126]: [10.147696557s] [10.147696557s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.284184 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-rdwz2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-rdwz2&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38070->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.303893 4758 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37986->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.303978 4758 trace.go:236] Trace[807678822]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"serving-cert" (22-Jan-2026 18:00:09.132) (total time: 10171ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[807678822]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37986->38.102.83.223:6443: read: connection reset by peer 10170ms (18:00:19.303) Jan 22 18:00:29 crc 
kubenswrapper[4758]: Trace[807678822]: [10.171027862s] [10.171027862s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.303991 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:37986->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.323566 4758 reflector.go:561] object-"openstack"/"dns-svc": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38008->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.323638 4758 trace.go:236] Trace[2049128854]: "Reflector ListAndWatch" name:object-"openstack"/"dns-svc" (22-Jan-2026 18:00:09.134) (total time: 10189ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2049128854]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38008->38.102.83.223:6443: read: connection reset by peer 10189ms (18:00:19.323) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2049128854]: [10.189518707s] [10.189518707s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.323657 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-svc\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38008->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.344462 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38158->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.344511 4758 trace.go:236] Trace[612675466]: "Reflector ListAndWatch" name:object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.141) (total time: 10202ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[612675466]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:38158->38.102.83.223:6443: read: connection reset by peer 10202ms (18:00:19.344) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[612675466]: [10.202559982s] [10.202559982s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.344522 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38158->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.364549 4758 reflector.go:561] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38114->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.364626 4758 trace.go:236] Trace[397159155]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"default-dockercfg-2llfx" (22-Jan-2026 18:00:09.138) (total time: 10225ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[397159155]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38114->38.102.83.223:6443: read: connection reset by peer 10225ms (18:00:19.364) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[397159155]: [10.225936249s] [10.225936249s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.364642 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"default-dockercfg-2llfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38114->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.384600 4758 reflector.go:561] object-"openstack-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38188->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.384660 4758 trace.go:236] Trace[1859958193]: "Reflector ListAndWatch" name:object-"openstack-operators"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.141) (total time: 10242ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1859958193]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38188->38.102.83.223:6443: read: connection reset by peer 10242ms (18:00:19.384) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1859958193]: [10.242670385s] [10.242670385s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.384694 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38188->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.404224 4758 reflector.go:561] object-"openstack"/"openstack-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38102->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.404289 4758 trace.go:236] Trace[419841335]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-scripts" (22-Jan-2026 18:00:09.136) (total time: 10267ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[419841335]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38102->38.102.83.223:6443: read: connection reset by peer 10267ms (18:00:19.404) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[419841335]: [10.267767719s] [10.267767719s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.404302 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38102->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.423834 4758 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38134->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.423923 4758 trace.go:236] Trace[1495871597]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"machine-api-operator-tls" (22-Jan-2026 18:00:09.140) (total time: 10283ms): Jan 
22 18:00:29 crc kubenswrapper[4758]: Trace[1495871597]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38134->38.102.83.223:6443: read: connection reset by peer 10282ms (18:00:19.423) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1495871597]: [10.283061907s] [10.283061907s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.423944 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38134->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.445518 4758 reflector.go:561] object-"openshift-ingress"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38206->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.445563 4758 trace.go:236] Trace[718386387]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"service-ca-bundle" (22-Jan-2026 18:00:09.145) (total time: 10299ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[718386387]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38206->38.102.83.223:6443: read: connection reset by peer 10299ms (18:00:19.445) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[718386387]: [10.299915185s] [10.299915185s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.445610 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38206->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.464134 4758 reflector.go:561] object-"openstack"/"cert-barbican-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-barbican-public-svc&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38220->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.464178 4758 trace.go:236] Trace[1957759812]: "Reflector 
ListAndWatch" name:object-"openstack"/"cert-barbican-public-svc" (22-Jan-2026 18:00:09.147) (total time: 10316ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1957759812]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-barbican-public-svc&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38220->38.102.83.223:6443: read: connection reset by peer 10316ms (18:00:19.464) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1957759812]: [10.316389704s] [10.316389704s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.464189 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-barbican-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-barbican-public-svc&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38220->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.483531 4758 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38244->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.483573 4758 trace.go:236] Trace[723104897]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" (22-Jan-2026 18:00:09.152) (total time: 10330ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[723104897]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38244->38.102.83.223:6443: read: connection reset by peer 10330ms (18:00:19.483) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[723104897]: [10.330631103s] [10.330631103s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.483584 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38244->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.503620 4758 reflector.go:561] object-"openshift-nmstate"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.223:38260->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.503666 4758 trace.go:236] Trace[1661478994]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.154) (total time: 10349ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1661478994]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38260->38.102.83.223:6443: read: connection reset by peer 10349ms (18:00:19.503) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1661478994]: [10.34959428s] [10.34959428s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.503679 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38260->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.524305 4758 reflector.go:561] object-"openstack"/"glance-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38274->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.524355 4758 trace.go:236] Trace[1522176119]: "Reflector ListAndWatch" name:object-"openstack"/"glance-scripts" (22-Jan-2026 18:00:09.157) (total time: 10366ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1522176119]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38274->38.102.83.223:6443: read: connection reset by peer 10366ms (18:00:19.524) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1522176119]: [10.366909402s] [10.366909402s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.524367 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38274->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.544329 4758 reflector.go:561] object-"openstack"/"ovndbcluster-sb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection 
refused - error from a previous attempt: read tcp 38.102.83.223:38430->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.544370 4758 trace.go:236] Trace[1183672061]: "Reflector ListAndWatch" name:object-"openstack"/"ovndbcluster-sb-config" (22-Jan-2026 18:00:09.180) (total time: 10363ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1183672061]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38430->38.102.83.223:6443: read: connection reset by peer 10363ms (18:00:19.544) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1183672061]: [10.363986942s] [10.363986942s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.544414 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38430->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.563964 4758 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38312->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.564083 4758 trace.go:236] Trace[292428601]: "Reflector ListAndWatch" name:object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" (22-Jan-2026 18:00:09.161) (total time: 10402ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[292428601]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38312->38.102.83.223:6443: read: connection reset by peer 10402ms (18:00:19.563) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[292428601]: [10.402697358s] [10.402697358s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.564114 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38312->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.584406 4758 reflector.go:561] 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38548->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.584525 4758 trace.go:236] Trace[1710915157]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" (22-Jan-2026 18:00:09.212) (total time: 10372ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1710915157]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38548->38.102.83.223:6443: read: connection reset by peer 10371ms (18:00:19.584) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1710915157]: [10.372071742s] [10.372071742s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.584556 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-7lnqk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38548->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.604519 4758 reflector.go:561] object-"cert-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38350->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.604595 4758 trace.go:236] Trace[724562788]: "Reflector ListAndWatch" name:object-"cert-manager"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.163) (total time: 10441ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[724562788]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38350->38.102.83.223:6443: read: connection reset by peer 10441ms (18:00:19.604) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[724562788]: [10.441338371s] [10.441338371s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.604609 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: 
connection refused - error from a previous attempt: read tcp 38.102.83.223:38350->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.624948 4758 reflector.go:561] object-"openstack"/"memcached-memcached-dockercfg-2w6nn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-2w6nn&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38364->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.625005 4758 trace.go:236] Trace[953461672]: "Reflector ListAndWatch" name:object-"openstack"/"memcached-memcached-dockercfg-2w6nn" (22-Jan-2026 18:00:09.164) (total time: 10460ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[953461672]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-2w6nn&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38364->38.102.83.223:6443: read: connection reset by peer 10460ms (18:00:19.624) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[953461672]: [10.460516483s] [10.460516483s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.625018 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-memcached-dockercfg-2w6nn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-2w6nn&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38364->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.644402 4758 reflector.go:561] object-"openstack"/"rabbitmq-server-dockercfg-d8jxf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-d8jxf&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38370->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.644514 4758 trace.go:236] Trace[195612333]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-server-dockercfg-d8jxf" (22-Jan-2026 18:00:09.167) (total time: 10476ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[195612333]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-d8jxf&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38370->38.102.83.223:6443: read: connection reset by peer 10476ms (18:00:19.644) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[195612333]: [10.476999933s] [10.476999933s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.644537 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-dockercfg-d8jxf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-d8jxf&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38370->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.664862 4758 reflector.go:561] object-"openstack"/"galera-openstack-dockercfg-g2jsf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-g2jsf&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38436->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.664950 4758 trace.go:236] Trace[1908725401]: "Reflector ListAndWatch" name:object-"openstack"/"galera-openstack-dockercfg-g2jsf" (22-Jan-2026 18:00:09.184) (total time: 10480ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1908725401]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-g2jsf&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38436->38.102.83.223:6443: read: connection reset by peer 10480ms (18:00:19.664) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1908725401]: [10.480658263s] [10.480658263s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.664982 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-dockercfg-g2jsf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-g2jsf&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38436->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.684765 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-1&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38392->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.684878 4758 trace.go:236] Trace[1239143156]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage-rulefiles-1" (22-Jan-2026 18:00:09.171) (total time: 10513ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1239143156]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-1&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38392->38.102.83.223:6443: read: connection reset by peer 10513ms (18:00:19.684) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1239143156]: [10.513122307s] [10.513122307s] END Jan 22 18:00:29 crc kubenswrapper[4758]: 
E0122 18:00:19.684899 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-1&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38392->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.704581 4758 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38446->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.704685 4758 trace.go:236] Trace[117259587]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.188) (total time: 10515ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[117259587]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38446->38.102.83.223:6443: read: connection reset by peer 10515ms (18:00:19.704) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[117259587]: [10.515973465s] [10.515973465s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.704706 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38446->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.724415 4758 reflector.go:561] object-"openstack"/"rabbitmq-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-config-data&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38404->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.724499 4758 trace.go:236] Trace[1884142264]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-config-data" (22-Jan-2026 18:00:09.179) (total time: 10545ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1884142264]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-config-data&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38404->38.102.83.223:6443: read: connection 
reset by peer 10545ms (18:00:19.724) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1884142264]: [10.545237062s] [10.545237062s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.724523 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-config-data&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38404->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.744688 4758 reflector.go:561] object-"openstack"/"rabbitmq-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38460->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.744811 4758 trace.go:236] Trace[614967391]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-erlang-cookie" (22-Jan-2026 18:00:09.192) (total time: 10551ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[614967391]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38460->38.102.83.223:6443: read: connection reset by peer 10551ms (18:00:19.744) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[614967391]: [10.551784592s] [10.551784592s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.744832 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38460->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.763978 4758 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38514->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.764034 4758 trace.go:236] Trace[1264909930]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" (22-Jan-2026 18:00:09.203) (total time: 10560ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1264909930]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38514->38.102.83.223:6443: read: connection reset by peer 10560ms (18:00:19.763) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1264909930]: [10.56095806s] [10.56095806s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.764048 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38514->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.784603 4758 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38506->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.784705 4758 trace.go:236] Trace[693796609]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"service-ca-bundle" (22-Jan-2026 18:00:09.199) (total time: 10585ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[693796609]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38506->38.102.83.223:6443: read: connection reset by peer 10584ms (18:00:19.784) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[693796609]: [10.585043108s] [10.585043108s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.784781 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38506->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.804605 4758 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pz96z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-pz96z&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:38530->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.804803 4758 trace.go:236] Trace[866099983]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pz96z" (22-Jan-2026 18:00:09.209) (total time: 10595ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[866099983]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-pz96z&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38530->38.102.83.223:6443: read: connection reset by peer 10595ms (18:00:19.804) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[866099983]: [10.595643917s] [10.595643917s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.804839 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-controller-manager-dockercfg-pz96z\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-pz96z&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38530->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.824613 4758 reflector.go:561] object-"openshift-dns-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38480->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.824844 4758 trace.go:236] Trace[477451405]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"metrics-tls" (22-Jan-2026 18:00:09.197) (total time: 10627ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[477451405]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38480->38.102.83.223:6443: read: connection reset by peer 10627ms (18:00:19.824) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[477451405]: [10.627402512s] [10.627402512s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.824881 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38480->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.844784 4758 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38566->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.844895 4758 trace.go:236] Trace[2139668614]: "Reflector ListAndWatch" name:object-"metallb-system"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.217) (total time: 10627ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2139668614]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38566->38.102.83.223:6443: read: connection reset by peer 10627ms (18:00:19.844) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2139668614]: [10.627141225s] [10.627141225s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.844927 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38566->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.865399 4758 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38358->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.865563 4758 trace.go:236] Trace[920976739]: "Reflector ListAndWatch" name:object-"openshift-console"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.163) (total time: 10702ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[920976739]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38358->38.102.83.223:6443: read: connection reset by peer 10702ms (18:00:19.865) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[920976739]: [10.702269173s] [10.702269173s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.865591 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38358->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: 
W0122 18:00:19.884678 4758 reflector.go:561] object-"openstack"/"ovsdbserver-nb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38304->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.884886 4758 trace.go:236] Trace[502048391]: "Reflector ListAndWatch" name:object-"openstack"/"ovsdbserver-nb" (22-Jan-2026 18:00:09.160) (total time: 10724ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[502048391]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38304->38.102.83.223:6443: read: connection reset by peer 10723ms (18:00:19.884) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[502048391]: [10.724079447s] [10.724079447s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.884922 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-nb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38304->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.904905 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38328->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.905050 4758 trace.go:236] Trace[13708529]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" (22-Jan-2026 18:00:09.161) (total time: 10743ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[13708529]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38328->38.102.83.223:6443: read: connection reset by peer 10743ms (18:00:19.904) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[13708529]: [10.74360978s] [10.74360978s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.905085 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-qx5rd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=84343\": dial tcp 
38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38328->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.924620 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38552->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.924785 4758 trace.go:236] Trace[1510150507]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"config" (22-Jan-2026 18:00:09.217) (total time: 10707ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1510150507]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38552->38.102.83.223:6443: read: connection reset by peer 10706ms (18:00:19.924) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1510150507]: [10.707053703s] [10.707053703s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.924848 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38552->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.943995 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38444->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.944094 4758 trace.go:236] Trace[1423623257]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-user-template-login" (22-Jan-2026 18:00:09.184) (total time: 10759ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1423623257]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38444->38.102.83.223:6443: read: connection reset by peer 10759ms (18:00:19.943) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1423623257]: [10.759617147s] [10.759617147s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.944117 4758 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38444->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.963954 4758 reflector.go:561] object-"openstack"/"barbican-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38448->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.964037 4758 trace.go:236] Trace[689654238]: "Reflector ListAndWatch" name:object-"openstack"/"barbican-config-data" (22-Jan-2026 18:00:09.191) (total time: 10772ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[689654238]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38448->38.102.83.223:6443: read: connection reset by peer 10772ms (18:00:19.963) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[689654238]: [10.772138187s] [10.772138187s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.964063 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38448->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:19.984293 4758 reflector.go:561] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9zqsl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-9zqsl&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38366->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:19.984396 4758 trace.go:236] Trace[1572916412]: "Reflector ListAndWatch" name:object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9zqsl" (22-Jan-2026 18:00:09.166) (total time: 10817ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1572916412]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-9zqsl&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38366->38.102.83.223:6443: read: 
connection reset by peer 10817ms (18:00:19.984) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1572916412]: [10.817989257s] [10.817989257s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:19.984423 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"barbican-operator-controller-manager-dockercfg-9zqsl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-9zqsl&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38366->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.004139 4758 reflector.go:561] object-"openstack"/"default-dockercfg-d4w66": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-d4w66&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38386->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.004237 4758 trace.go:236] Trace[1472563669]: "Reflector ListAndWatch" name:object-"openstack"/"default-dockercfg-d4w66" (22-Jan-2026 18:00:09.170) (total time: 10833ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1472563669]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-d4w66&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38386->38.102.83.223:6443: read: connection reset by peer 10833ms (18:00:20.004) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1472563669]: [10.833583532s] [10.833583532s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.004261 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"default-dockercfg-d4w66\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-d4w66&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38386->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.024272 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38402->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.024366 4758 trace.go:236] Trace[262702144]: "Reflector ListAndWatch" name:object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.176) (total time: 10847ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[262702144]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38402->38.102.83.223:6443: read: connection reset by peer 10847ms (18:00:20.024) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[262702144]: [10.847413009s] [10.847413009s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.024388 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38402->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.044421 4758 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38470->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.044544 4758 trace.go:236] Trace[179016736]: "Reflector ListAndWatch" name:object-"openshift-network-node-identity"/"env-overrides" (22-Jan-2026 18:00:09.196) (total time: 10848ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[179016736]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38470->38.102.83.223:6443: read: connection reset by peer 10848ms (18:00:20.044) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[179016736]: [10.8481615s] [10.8481615s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.044572 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38470->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.064305 4758 reflector.go:561] object-"openshift-console"/"console-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38418->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.064395 4758 trace.go:236] Trace[1722467538]: "Reflector 
ListAndWatch" name:object-"openshift-console"/"console-serving-cert" (22-Jan-2026 18:00:09.179) (total time: 10885ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1722467538]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38418->38.102.83.223:6443: read: connection reset by peer 10885ms (18:00:20.064) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1722467538]: [10.885143427s] [10.885143427s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.064420 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38418->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.084187 4758 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38490->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.084311 4758 trace.go:236] Trace[2056457847]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" (22-Jan-2026 18:00:09.198) (total time: 10885ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2056457847]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38490->38.102.83.223:6443: read: connection reset by peer 10885ms (18:00:20.084) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2056457847]: [10.885740754s] [10.885740754s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.084338 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38490->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.104693 4758 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38432->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.104898 4758 trace.go:236] Trace[1610065035]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" (22-Jan-2026 18:00:09.183) (total time: 10921ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1610065035]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38432->38.102.83.223:6443: read: connection reset by peer 10921ms (18:00:20.104) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1610065035]: [10.921690534s] [10.921690534s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.104936 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38432->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.143978 4758 reflector.go:561] object-"openshift-ingress-canary"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38512->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.144075 4758 trace.go:236] Trace[753731038]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.201) (total time: 10942ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[753731038]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38512->38.102.83.223:6443: read: connection reset by peer 10942ms (18:00:20.143) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[753731038]: [10.94209844s] [10.94209844s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.144108 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.223:38512->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.144555 4758 reflector.go:561] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38528->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.144668 4758 trace.go:236] Trace[1354188464]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" (22-Jan-2026 18:00:09.204) (total time: 10939ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1354188464]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38528->38.102.83.223:6443: read: connection reset by peer 10939ms (18:00:20.144) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1354188464]: [10.939845038s] [10.939845038s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.144694 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-9mqw5\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38528->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.164564 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38542->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.164669 4758 trace.go:236] Trace[385446518]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"machine-approver-tls" (22-Jan-2026 18:00:09.210) (total time: 10954ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[385446518]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38542->38.102.83.223:6443: read: connection reset by peer 10954ms (18:00:20.164) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[385446518]: [10.954336904s] [10.954336904s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.164695 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to 
watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38542->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.184233 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38580->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.184387 4758 trace.go:236] Trace[1899032083]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.218) (total time: 10965ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1899032083]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38580->38.102.83.223:6443: read: connection reset by peer 10965ms (18:00:20.184) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1899032083]: [10.965464568s] [10.965464568s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.184417 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38580->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.205156 4758 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38594->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.205391 4758 trace.go:236] Trace[420186726]: "Reflector ListAndWatch" name:object-"openshift-multus"/"metrics-daemon-secret" (22-Jan-2026 18:00:09.219) (total time: 10985ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[420186726]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38594->38.102.83.223:6443: read: connection reset 
by peer 10985ms (18:00:20.205) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[420186726]: [10.985395791s] [10.985395791s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.205443 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38594->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.224425 4758 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38598->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.224523 4758 trace.go:236] Trace[1885932029]: "Reflector ListAndWatch" name:object-"openshift-multus"/"default-cni-sysctl-allowlist" (22-Jan-2026 18:00:09.222) (total time: 11002ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1885932029]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38598->38.102.83.223:6443: read: connection reset by peer 11002ms (18:00:20.224) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1885932029]: [11.002434684s] [11.002434684s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.224546 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38598->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.244683 4758 reflector.go:561] object-"openstack"/"neutron-httpd-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38604->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.244790 4758 trace.go:236] Trace[210778285]: "Reflector ListAndWatch" name:object-"openstack"/"neutron-httpd-config" (22-Jan-2026 18:00:09.222) (total time: 11022ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[210778285]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=84193": dial tcp 38.102.83.223:6443: 
connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38604->38.102.83.223:6443: read: connection reset by peer 11022ms (18:00:20.244) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[210778285]: [11.022675427s] [11.022675427s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.244810 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-httpd-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38604->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.264459 4758 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-q7l7k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-q7l7k&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38610->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.264531 4758 trace.go:236] Trace[1541555650]: "Reflector ListAndWatch" name:object-"openstack"/"keystone-keystone-dockercfg-q7l7k" (22-Jan-2026 18:00:09.222) (total time: 11042ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1541555650]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-q7l7k&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38610->38.102.83.223:6443: read: connection reset by peer 11042ms (18:00:20.264) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1541555650]: [11.042399195s] [11.042399195s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.264549 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-q7l7k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-q7l7k&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38610->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.283714 4758 request.go:700] Waited for 5.131677809s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84424": read tcp 38.102.83.223:38624->38.102.83.223:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84424 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.284252 4758 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38624->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.284343 4758 trace.go:236] Trace[116380931]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"serving-cert" (22-Jan-2026 18:00:09.222) (total time: 11062ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[116380931]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38624->38.102.83.223:6443: read: connection reset by peer 11062ms (18:00:20.284) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[116380931]: [11.062192504s] [11.062192504s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.284364 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38624->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.305048 4758 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38728->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.305318 4758 trace.go:236] Trace[915596893]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" (22-Jan-2026 18:00:09.245) (total time: 11059ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[915596893]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38728->38.102.83.223:6443: read: connection reset by peer 11059ms (18:00:20.305) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[915596893]: [11.059584533s] [11.059584533s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.305352 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a 
previous attempt: read tcp 38.102.83.223:38728->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.324835 4758 reflector.go:561] object-"openshift-nmstate"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38636->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.324980 4758 trace.go:236] Trace[1015289139]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.223) (total time: 11101ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1015289139]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38636->38.102.83.223:6443: read: connection reset by peer 11101ms (18:00:20.324) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1015289139]: [11.1016762s] [11.1016762s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.325019 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38636->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.344331 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38664->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.344465 4758 trace.go:236] Trace[336083257]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"pprof-cert" (22-Jan-2026 18:00:09.225) (total time: 11119ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[336083257]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38664->38.102.83.223:6443: read: connection reset by peer 11119ms (18:00:20.344) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[336083257]: [11.119172667s] [11.119172667s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.344496 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38664->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.364588 4758 reflector.go:561] object-"metallb-system"/"frr-k8s-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38682->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.364816 4758 trace.go:236] Trace[418906606]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-k8s-webhook-server-cert" (22-Jan-2026 18:00:09.228) (total time: 11136ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[418906606]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38682->38.102.83.223:6443: read: connection reset by peer 11135ms (18:00:20.364) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[418906606]: [11.136099698s] [11.136099698s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.364858 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38682->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.384383 4758 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38716->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.384465 4758 trace.go:236] Trace[1270047881]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"marketplace-trusted-ca" (22-Jan-2026 18:00:09.236) (total time: 11148ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1270047881]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38716->38.102.83.223:6443: read: connection reset by peer 11148ms (18:00:20.384) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1270047881]: [11.148346783s] [11.148346783s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 
18:00:20.384490 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38716->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.404803 4758 reflector.go:561] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f7gls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-f7gls&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38744->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.404894 4758 trace.go:236] Trace[977170684]: "Reflector ListAndWatch" name:object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f7gls" (22-Jan-2026 18:00:09.252) (total time: 11152ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[977170684]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-f7gls&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38744->38.102.83.223:6443: read: connection reset by peer 11152ms (18:00:20.404) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[977170684]: [11.152612808s] [11.152612808s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.404920 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-controller-manager-dockercfg-f7gls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-f7gls&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38744->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.424932 4758 reflector.go:561] object-"openstack"/"dns-swift-storage-0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-swift-storage-0&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38732->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.425038 4758 trace.go:236] Trace[1181597885]: "Reflector ListAndWatch" name:object-"openstack"/"dns-swift-storage-0" (22-Jan-2026 18:00:09.245) (total time: 11179ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1181597885]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-swift-storage-0&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection 
refused - error from a previous attempt: read tcp 38.102.83.223:38732->38.102.83.223:6443: read: connection reset by peer 11179ms (18:00:20.424) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1181597885]: [11.179312876s] [11.179312876s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.425056 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-swift-storage-0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-swift-storage-0&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38732->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.444671 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38652->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.444882 4758 trace.go:236] Trace[349314340]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" (22-Jan-2026 18:00:09.224) (total time: 11220ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[349314340]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38652->38.102.83.223:6443: read: connection reset by peer 11220ms (18:00:20.444) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[349314340]: [11.220739966s] [11.220739966s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.444927 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-c2lfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38652->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.463979 4758 reflector.go:561] object-"openstack"/"openstack-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38670->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.464118 4758 trace.go:236] Trace[1398443046]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-config-data" (22-Jan-2026 18:00:09.228) (total time: 
11235ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1398443046]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38670->38.102.83.223:6443: read: connection reset by peer 11235ms (18:00:20.463) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1398443046]: [11.235421366s] [11.235421366s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.464151 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38670->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.484230 4758 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38712->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.484362 4758 trace.go:236] Trace[1647627544]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"etcd-serving-ca" (22-Jan-2026 18:00:09.229) (total time: 11254ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1647627544]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38712->38.102.83.223:6443: read: connection reset by peer 11254ms (18:00:20.484) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1647627544]: [11.254561297s] [11.254561297s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.484394 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38712->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.504313 4758 reflector.go:561] object-"openshift-nmstate"/"nmstate-handler-dockercfg-v97lh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-v97lh&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38724->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.504403 4758 trace.go:236] Trace[570491876]: 
"Reflector ListAndWatch" name:object-"openshift-nmstate"/"nmstate-handler-dockercfg-v97lh" (22-Jan-2026 18:00:09.244) (total time: 11260ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[570491876]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-v97lh&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38724->38.102.83.223:6443: read: connection reset by peer 11259ms (18:00:20.504) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[570491876]: [11.260052927s] [11.260052927s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.504425 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-handler-dockercfg-v97lh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-v97lh&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38724->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.524476 4758 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38898->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.524601 4758 trace.go:236] Trace[935788829]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"encryption-config-1" (22-Jan-2026 18:00:09.280) (total time: 11244ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[935788829]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38898->38.102.83.223:6443: read: connection reset by peer 11244ms (18:00:20.524) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[935788829]: [11.244249466s] [11.244249466s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.524631 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38898->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.544071 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-server-dockercfg-8d4mj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-server-dockercfg-8d4mj&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous 
attempt: read tcp 38.102.83.223:38760->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.544174 4758 trace.go:236] Trace[1120574927]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-notifications-server-dockercfg-8d4mj" (22-Jan-2026 18:00:09.252) (total time: 11291ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1120574927]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-server-dockercfg-8d4mj&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38760->38.102.83.223:6443: read: connection reset by peer 11291ms (18:00:20.544) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1120574927]: [11.291856034s] [11.291856034s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.544199 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-server-dockercfg-8d4mj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-server-dockercfg-8d4mj&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38760->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.564787 4758 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38784->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.564883 4758 trace.go:236] Trace[1656420464]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"audit-1" (22-Jan-2026 18:00:09.260) (total time: 11304ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1656420464]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38784->38.102.83.223:6443: read: connection reset by peer 11303ms (18:00:20.564) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1656420464]: [11.304072077s] [11.304072077s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.564903 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38784->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.584212 4758 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38804->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.584334 4758 trace.go:236] Trace[1344151182]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"trusted-ca" (22-Jan-2026 18:00:09.265) (total time: 11319ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1344151182]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38804->38.102.83.223:6443: read: connection reset by peer 11319ms (18:00:20.584) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1344151182]: [11.319126247s] [11.319126247s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.584361 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38804->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.604600 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38834->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.604717 4758 trace.go:236] Trace[1317162352]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.265) (total time: 11339ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1317162352]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38834->38.102.83.223:6443: read: connection reset by peer 11339ms (18:00:20.604) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1317162352]: [11.3393986s] [11.3393986s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.604758 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38834->38.102.83.223:6443: read: connection reset by 
peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.624552 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38854->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.624651 4758 trace.go:236] Trace[308998796]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"etcd-client" (22-Jan-2026 18:00:09.270) (total time: 11353ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[308998796]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38854->38.102.83.223:6443: read: connection reset by peer 11353ms (18:00:20.624) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[308998796]: [11.35371408s] [11.35371408s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.624675 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38854->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.645243 4758 reflector.go:561] object-"openstack"/"cert-kube-state-metrics-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-kube-state-metrics-svc&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38884->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.645345 4758 trace.go:236] Trace[1238804537]: "Reflector ListAndWatch" name:object-"openstack"/"cert-kube-state-metrics-svc" (22-Jan-2026 18:00:09.276) (total time: 11368ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1238804537]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-kube-state-metrics-svc&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38884->38.102.83.223:6443: read: connection reset by peer 11368ms (18:00:20.645) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1238804537]: [11.368335948s] [11.368335948s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.645368 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-kube-state-metrics-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-kube-state-metrics-svc&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a 
previous attempt: read tcp 38.102.83.223:38884->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.664052 4758 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38872->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.664144 4758 trace.go:236] Trace[1543840287]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.273) (total time: 11390ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1543840287]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38872->38.102.83.223:6443: read: connection reset by peer 11390ms (18:00:20.664) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1543840287]: [11.390942325s] [11.390942325s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.664168 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38872->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.684204 4758 reflector.go:561] object-"openstack"/"glance-default-external-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38892->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.684311 4758 trace.go:236] Trace[2117409634]: "Reflector ListAndWatch" name:object-"openstack"/"glance-default-external-config-data" (22-Jan-2026 18:00:09.280) (total time: 11404ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2117409634]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38892->38.102.83.223:6443: read: connection reset by peer 11403ms (18:00:20.684) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2117409634]: [11.404023992s] [11.404023992s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.684348 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-external-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38892->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.705344 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38774->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.705426 4758 trace.go:236] Trace[1386985728]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" (22-Jan-2026 18:00:09.255) (total time: 11450ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1386985728]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38774->38.102.83.223:6443: read: connection reset by peer 11450ms (18:00:20.705) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1386985728]: [11.450082177s] [11.450082177s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.705446 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38774->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.723960 4758 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38796->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.724052 4758 trace.go:236] Trace[881986675]: "Reflector ListAndWatch" name:object-"openshift-dns"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.260) (total time: 11463ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[881986675]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38796->38.102.83.223:6443: read: connection reset by peer 11463ms (18:00:20.723) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[881986675]: [11.463210815s] [11.463210815s] END Jan 22 18:00:29 crc 
kubenswrapper[4758]: E0122 18:00:20.724075 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38796->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.743682 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38870->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.743787 4758 trace.go:236] Trace[1829319721]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" (22-Jan-2026 18:00:09.270) (total time: 11472ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1829319721]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38870->38.102.83.223:6443: read: connection reset by peer 11472ms (18:00:20.743) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1829319721]: [11.472795706s] [11.472795706s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.743811 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38870->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.764311 4758 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38874->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.764407 4758 trace.go:236] Trace[1733352883]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.275) (total time: 11488ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1733352883]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38874->38.102.83.223:6443: read: connection reset by peer 11488ms (18:00:20.764) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1733352883]: [11.488526515s] [11.488526515s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.764430 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38874->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.784165 4758 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38838->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.784234 4758 trace.go:236] Trace[1199236501]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"trusted-ca-bundle" (22-Jan-2026 18:00:09.266) (total time: 11517ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1199236501]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38838->38.102.83.223:6443: read: connection reset by peer 11517ms (18:00:20.784) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1199236501]: [11.517620867s] [11.517620867s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.784251 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38838->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.804404 4758 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38818->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.804521 4758 trace.go:236] Trace[1036229225]: "Reflector 
ListAndWatch" name:object-"openshift-network-diagnostics"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.265) (total time: 11539ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1036229225]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38818->38.102.83.223:6443: read: connection reset by peer 11539ms (18:00:20.804) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1036229225]: [11.539256708s] [11.539256708s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.804547 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38818->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.824170 4758 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38908->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.824263 4758 trace.go:236] Trace[1969747596]: "Reflector ListAndWatch" name:object-"openshift-multus"/"cni-copy-resources" (22-Jan-2026 18:00:09.282) (total time: 11541ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1969747596]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38908->38.102.83.223:6443: read: connection reset by peer 11541ms (18:00:20.824) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1969747596]: [11.541790228s] [11.541790228s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.824286 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38908->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.843833 4758 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:39004->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.843917 4758 trace.go:236] Trace[838997937]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"image-import-ca" (22-Jan-2026 18:00:09.299) (total time: 11544ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[838997937]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39004->38.102.83.223:6443: read: connection reset by peer 11544ms (18:00:20.843) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[838997937]: [11.544222503s] [11.544222503s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.843939 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39004->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.864083 4758 reflector.go:561] object-"openstack"/"cert-ovnnorthd-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovnnorthd-ovndbs&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38926->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.864213 4758 trace.go:236] Trace[966069100]: "Reflector ListAndWatch" name:object-"openstack"/"cert-ovnnorthd-ovndbs" (22-Jan-2026 18:00:09.284) (total time: 11579ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[966069100]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovnnorthd-ovndbs&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38926->38.102.83.223:6443: read: connection reset by peer 11579ms (18:00:20.864) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[966069100]: [11.579864014s] [11.579864014s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.864244 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovnnorthd-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovnnorthd-ovndbs&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38926->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.884078 4758 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38952->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.884176 4758 trace.go:236] Trace[1318119752]: "Reflector ListAndWatch" name:object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" (22-Jan-2026 18:00:09.286) (total time: 11597ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1318119752]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38952->38.102.83.223:6443: read: connection reset by peer 11597ms (18:00:20.884) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1318119752]: [11.597762482s] [11.597762482s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.884201 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-x57mr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38952->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.904476 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38960->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.904577 4758 trace.go:236] Trace[492584342]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.290) (total time: 11613ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[492584342]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38960->38.102.83.223:6443: read: connection reset by peer 11613ms (18:00:20.904) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[492584342]: [11.613953394s] [11.613953394s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.904601 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38960->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.924796 4758 reflector.go:561] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38990->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.924899 4758 trace.go:236] Trace[1903141882]: "Reflector ListAndWatch" name:object-"openshift-cluster-version"/"default-dockercfg-gxtc4" (22-Jan-2026 18:00:09.295) (total time: 11629ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1903141882]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38990->38.102.83.223:6443: read: connection reset by peer 11629ms (18:00:20.924) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1903141882]: [11.629567249s] [11.629567249s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.924936 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"default-dockercfg-gxtc4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38990->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.945253 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38974->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.945392 4758 trace.go:236] Trace[962507397]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" (22-Jan-2026 18:00:09.290) (total time: 11654ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[962507397]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38974->38.102.83.223:6443: read: connection reset by peer 11654ms (18:00:20.945) Jan 22 18:00:29 
crc kubenswrapper[4758]: Trace[962507397]: [11.654624833s] [11.654624833s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.945415 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38974->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.964488 4758 reflector.go:561] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-d798m": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-d798m&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39034->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.964572 4758 trace.go:236] Trace[1381744827]: "Reflector ListAndWatch" name:object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-d798m" (22-Jan-2026 18:00:09.302) (total time: 11662ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1381744827]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-d798m&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39034->38.102.83.223:6443: read: connection reset by peer 11662ms (18:00:20.964) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1381744827]: [11.662220649s] [11.662220649s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.964594 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"mariadb-operator-controller-manager-dockercfg-d798m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-d798m&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39034->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:20.984285 4758 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-9jfxj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-9jfxj&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39018->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:20.984372 4758 trace.go:236] Trace[1541271917]: "Reflector ListAndWatch" name:object-"metallb-system"/"speaker-dockercfg-9jfxj" (22-Jan-2026 18:00:09.301) (total time: 11682ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1541271917]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-9jfxj&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39018->38.102.83.223:6443: read: connection reset by peer 11682ms (18:00:20.984) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1541271917]: [11.68240378s] [11.68240378s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:20.984394 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-9jfxj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-9jfxj&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39018->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.004554 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=84596": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38910->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.004657 4758 trace.go:236] Trace[142157235]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 18:00:09.283) (total time: 11721ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[142157235]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=84596": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38910->38.102.83.223:6443: read: connection reset by peer 11720ms (18:00:21.004) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[142157235]: [11.721033923s] [11.721033923s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.004682 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=84596\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38910->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.023773 4758 reflector.go:561] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38942->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.023857 4758 trace.go:236] Trace[1916420563]: "Reflector ListAndWatch" name:object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" (22-Jan-2026 18:00:09.285) (total time: 11738ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1916420563]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38942->38.102.83.223:6443: read: connection reset by peer 11738ms (18:00:21.023) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1916420563]: [11.738571831s] [11.738571831s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.023875 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ac-dockercfg-9lkdf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38942->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.043872 4758 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38956->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.043944 4758 trace.go:236] Trace[1832229059]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.286) (total time: 11757ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1832229059]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38956->38.102.83.223:6443: read: connection reset by peer 11757ms (18:00:21.043) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1832229059]: [11.757508627s] [11.757508627s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.043967 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38956->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.063800 4758 reflector.go:561] object-"openstack"/"glance-default-internal-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38966->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.063878 4758 trace.go:236] 
Trace[1578109402]: "Reflector ListAndWatch" name:object-"openstack"/"glance-default-internal-config-data" (22-Jan-2026 18:00:09.290) (total time: 11773ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1578109402]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38966->38.102.83.223:6443: read: connection reset by peer 11773ms (18:00:21.063) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1578109402]: [11.773196155s] [11.773196155s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.063901 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-internal-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38966->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.083428 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="743945d0-7488-4665-beaf-f2026e10a424" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.9:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.083709 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="743945d0-7488-4665-beaf-f2026e10a424" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.9:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.083786 4758 reflector.go:561] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2w6mb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-2w6mb&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38986->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.083846 4758 trace.go:236] Trace[241319578]: "Reflector ListAndWatch" name:object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2w6mb" (22-Jan-2026 18:00:09.293) (total time: 11790ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[241319578]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-2w6mb&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38986->38.102.83.223:6443: read: connection reset by peer 11790ms (18:00:21.083) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[241319578]: [11.790419784s] [11.790419784s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 
18:00:21.083866 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"manila-operator-controller-manager-dockercfg-2w6mb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-2w6mb&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:38986->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.104181 4758 reflector.go:561] object-"openstack"/"test-operator-controller-priv-key": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-priv-key&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39002->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.104284 4758 trace.go:236] Trace[793360858]: "Reflector ListAndWatch" name:object-"openstack"/"test-operator-controller-priv-key" (22-Jan-2026 18:00:09.299) (total time: 11804ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[793360858]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-priv-key&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39002->38.102.83.223:6443: read: connection reset by peer 11804ms (18:00:21.104) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[793360858]: [11.80456232s] [11.80456232s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.104307 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"test-operator-controller-priv-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-priv-key&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39002->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.123667 4758 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39040->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.123727 4758 trace.go:236] Trace[1034677993]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"trusted-ca" (22-Jan-2026 18:00:09.307) (total time: 11816ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1034677993]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:39040->38.102.83.223:6443: read: connection reset by peer 11816ms (18:00:21.123) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1034677993]: [11.816374212s] [11.816374212s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.123755 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39040->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.143937 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39070->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.144042 4758 trace.go:236] Trace[1883971653]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.308) (total time: 11835ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1883971653]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39070->38.102.83.223:6443: read: connection reset by peer 11835ms (18:00:21.143) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1883971653]: [11.835201725s] [11.835201725s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.144065 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39070->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.164647 4758 reflector.go:561] object-"openstack"/"cert-glance-default-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39052->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.164754 4758 trace.go:236] Trace[536076310]: "Reflector ListAndWatch" name:object-"openstack"/"cert-glance-default-internal-svc" (22-Jan-2026 18:00:09.307) (total time: 11857ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[536076310]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39052->38.102.83.223:6443: read: connection reset by peer 11857ms (18:00:21.164) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[536076310]: [11.857088361s] [11.857088361s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.164785 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-glance-default-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39052->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.184614 4758 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39080->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.184699 4758 trace.go:236] Trace[1057666065]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-config" (22-Jan-2026 18:00:09.309) (total time: 11874ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1057666065]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39080->38.102.83.223:6443: read: connection reset by peer 11874ms (18:00:21.184) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1057666065]: [11.874967878s] [11.874967878s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.184720 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39080->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.204570 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39060->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.204719 4758 trace.go:236] Trace[1590064312]: "Reflector ListAndWatch" 
name:object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" (22-Jan-2026 18:00:09.307) (total time: 11896ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1590064312]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39060->38.102.83.223:6443: read: connection reset by peer 11896ms (18:00:21.204) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1590064312]: [11.896958828s] [11.896958828s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.204766 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39060->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.225165 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39082->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.225279 4758 trace.go:236] Trace[275919636]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" (22-Jan-2026 18:00:09.311) (total time: 11913ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[275919636]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39082->38.102.83.223:6443: read: connection reset by peer 11913ms (18:00:21.225) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[275919636]: [11.913250151s] [11.913250151s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.225306 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39082->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.244274 4758 reflector.go:561] object-"openstack"/"cert-neutron-public-svc": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-public-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39102->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.244379 4758 trace.go:236] Trace[1008264494]: "Reflector ListAndWatch" name:object-"openstack"/"cert-neutron-public-svc" (22-Jan-2026 18:00:09.314) (total time: 11929ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1008264494]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-public-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39102->38.102.83.223:6443: read: connection reset by peer 11929ms (18:00:21.244) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1008264494]: [11.929537476s] [11.929537476s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.244404 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-neutron-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-public-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39102->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.264769 4758 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39106->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.264892 4758 trace.go:236] Trace[1219047163]: "Reflector ListAndWatch" name:object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" (22-Jan-2026 18:00:09.315) (total time: 11949ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1219047163]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39106->38.102.83.223:6443: read: connection reset by peer 11949ms (18:00:21.264) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1219047163]: [11.949407067s] [11.949407067s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.264919 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:39106->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.284317 4758 request.go:700] Waited for 6.128331097s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-storage-config-data&resourceVersion=84116": read tcp 38.102.83.223:39092->38.102.83.223:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-storage-config-data&resourceVersion=84116 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.284985 4758 reflector.go:561] object-"openstack"/"swift-storage-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-storage-config-data&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39092->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.285092 4758 trace.go:236] Trace[1156645236]: "Reflector ListAndWatch" name:object-"openstack"/"swift-storage-config-data" (22-Jan-2026 18:00:09.313) (total time: 11971ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1156645236]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-storage-config-data&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39092->38.102.83.223:6443: read: connection reset by peer 11971ms (18:00:21.284) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1156645236]: [11.971390947s] [11.971390947s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.285119 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-storage-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-storage-config-data&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39092->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.304707 4758 reflector.go:561] object-"openstack"/"placement-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39122->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.304883 4758 trace.go:236] Trace[656969276]: "Reflector ListAndWatch" name:object-"openstack"/"placement-scripts" (22-Jan-2026 18:00:09.316) (total time: 11988ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[656969276]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:39122->38.102.83.223:6443: read: connection reset by peer 11988ms (18:00:21.304) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[656969276]: [11.988231616s] [11.988231616s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.304915 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39122->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.324407 4758 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39138->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.324487 4758 trace.go:236] Trace[1797681051]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.318) (total time: 12005ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1797681051]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39138->38.102.83.223:6443: read: connection reset by peer 12005ms (18:00:21.324) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1797681051]: [12.005551989s] [12.005551989s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.324509 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39138->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.328547 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="6.4s" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.344370 4758 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39140->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.344500 
4758 trace.go:236] Trace[83459558]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" (22-Jan-2026 18:00:09.320) (total time: 12024ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[83459558]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39140->38.102.83.223:6443: read: connection reset by peer 12024ms (18:00:21.344) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[83459558]: [12.024402841s] [12.024402841s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.344533 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39140->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.364694 4758 reflector.go:561] object-"openstack"/"cinder-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39128->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.364840 4758 trace.go:236] Trace[69830893]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-config-data" (22-Jan-2026 18:00:09.317) (total time: 12047ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[69830893]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39128->38.102.83.223:6443: read: connection reset by peer 12046ms (18:00:21.364) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[69830893]: [12.047036309s] [12.047036309s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.364869 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39128->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.384507 4758 reflector.go:561] object-"hostpath-provisioner"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:39162->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.384621 4758 trace.go:236] Trace[687772783]: "Reflector ListAndWatch" name:object-"hostpath-provisioner"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.329) (total time: 12055ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[687772783]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39162->38.102.83.223:6443: read: connection reset by peer 12055ms (18:00:21.384) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[687772783]: [12.055239813s] [12.055239813s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.384649 4758 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39162->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.405073 4758 reflector.go:561] object-"metallb-system"/"frr-k8s-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39146->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.405198 4758 trace.go:236] Trace[2049709585]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-k8s-certs-secret" (22-Jan-2026 18:00:09.324) (total time: 12080ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2049709585]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39146->38.102.83.223:6443: read: connection reset by peer 12080ms (18:00:21.405) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2049709585]: [12.080187313s] [12.080187313s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.405230 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39146->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.424847 4758 reflector.go:561] object-"openstack"/"cert-glance-default-public-svc": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-public-svc&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39160->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.424947 4758 trace.go:236] Trace[1890470968]: "Reflector ListAndWatch" name:object-"openstack"/"cert-glance-default-public-svc" (22-Jan-2026 18:00:09.326) (total time: 12098ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1890470968]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-public-svc&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39160->38.102.83.223:6443: read: connection reset by peer 12098ms (18:00:21.424) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1890470968]: [12.09882708s] [12.09882708s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.424974 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-glance-default-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-public-svc&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39160->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.444315 4758 reflector.go:561] object-"openshift-ingress-canary"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39240->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.444419 4758 trace.go:236] Trace[262067285]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.347) (total time: 12097ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[262067285]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39240->38.102.83.223:6443: read: connection reset by peer 12097ms (18:00:21.444) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[262067285]: [12.097175495s] [12.097175495s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.444446 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39240->38.102.83.223:6443: read: connection reset by 
peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.464377 4758 reflector.go:561] object-"openstack"/"cert-ceilometer-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ceilometer-internal-svc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40260->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.464469 4758 trace.go:236] Trace[1289215576]: "Reflector ListAndWatch" name:object-"openstack"/"cert-ceilometer-internal-svc" (22-Jan-2026 18:00:09.521) (total time: 11942ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1289215576]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ceilometer-internal-svc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40260->38.102.83.223:6443: read: connection reset by peer 11942ms (18:00:21.464) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1289215576]: [11.942463209s] [11.942463209s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.464496 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ceilometer-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ceilometer-internal-svc&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40260->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.484711 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-erlang-cookie&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39386->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.484815 4758 trace.go:236] Trace[1875045029]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-notifications-erlang-cookie" (22-Jan-2026 18:00:09.371) (total time: 12113ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1875045029]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-erlang-cookie&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39386->38.102.83.223:6443: read: connection reset by peer 12113ms (18:00:21.484) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1875045029]: [12.113546771s] [12.113546771s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.484837 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-erlang-cookie&resourceVersion=84466\": dial tcp 
38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39386->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.503796 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39618->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.503861 4758 trace.go:236] Trace[2016880832]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" (22-Jan-2026 18:00:09.406) (total time: 12097ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2016880832]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39618->38.102.83.223:6443: read: connection reset by peer 12097ms (18:00:21.503) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2016880832]: [12.097435432s] [12.097435432s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.503876 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39618->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.525325 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39606->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.525500 4758 trace.go:236] Trace[1096927378]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-default-user" (22-Jan-2026 18:00:09.406) (total time: 12119ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1096927378]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39606->38.102.83.223:6443: read: connection reset by peer 12118ms (18:00:21.525) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1096927378]: [12.119048951s] [12.119048951s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.525542 4758 reflector.go:158] "Unhandled 
Error" err="object-\"openstack\"/\"rabbitmq-cell1-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39606->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.544865 4758 reflector.go:561] object-"openshift-nmstate"/"default-dockercfg-ckpvf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-ckpvf&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39598->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.544991 4758 trace.go:236] Trace[835559417]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"default-dockercfg-ckpvf" (22-Jan-2026 18:00:09.405) (total time: 12139ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[835559417]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-ckpvf&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39598->38.102.83.223:6443: read: connection reset by peer 12139ms (18:00:21.544) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[835559417]: [12.139725665s] [12.139725665s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.545022 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"default-dockercfg-ckpvf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-ckpvf&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39598->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.564490 4758 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39586->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.564654 4758 trace.go:236] Trace[466052974]: "Reflector ListAndWatch" name:object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" (22-Jan-2026 18:00:09.404) (total time: 12160ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[466052974]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39586->38.102.83.223:6443: read: connection reset by peer 12160ms (18:00:21.564) Jan 22 
18:00:29 crc kubenswrapper[4758]: Trace[466052974]: [12.160266245s] [12.160266245s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.564696 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39586->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.584651 4758 reflector.go:561] object-"openstack"/"cert-watcher-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39570->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.584809 4758 trace.go:236] Trace[1009243703]: "Reflector ListAndWatch" name:object-"openstack"/"cert-watcher-internal-svc" (22-Jan-2026 18:00:09.401) (total time: 12183ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1009243703]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39570->38.102.83.223:6443: read: connection reset by peer 12183ms (18:00:21.584) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1009243703]: [12.183403326s] [12.183403326s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.584838 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-watcher-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39570->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.605181 4758 reflector.go:561] object-"openstack"/"cert-swift-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-swift-internal-svc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39564->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.605289 4758 trace.go:236] Trace[690286312]: "Reflector ListAndWatch" name:object-"openstack"/"cert-swift-internal-svc" (22-Jan-2026 18:00:09.400) (total time: 12205ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[690286312]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-swift-internal-svc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:39564->38.102.83.223:6443: read: connection reset by peer 12205ms (18:00:21.605) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[690286312]: [12.205111008s] [12.205111008s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.605338 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-swift-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-swift-internal-svc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39564->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.624333 4758 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39544->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.624449 4758 trace.go:236] Trace[1931142139]: "Reflector ListAndWatch" name:object-"openshift-multus"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.396) (total time: 12227ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1931142139]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39544->38.102.83.223:6443: read: connection reset by peer 12227ms (18:00:21.624) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1931142139]: [12.227513839s] [12.227513839s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.624472 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39544->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.645059 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39540->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.645209 4758 trace.go:236] Trace[1887688148]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-operator-serving-cert" (22-Jan-2026 18:00:09.394) (total time: 12250ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1887688148]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39540->38.102.83.223:6443: read: connection reset by peer 12250ms (18:00:21.645) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1887688148]: [12.250936366s] [12.250936366s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.645244 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39540->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.664134 4758 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39528->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.664241 4758 trace.go:236] Trace[2132893552]: "Reflector ListAndWatch" name:object-"openstack"/"ceilometer-scripts" (22-Jan-2026 18:00:09.392) (total time: 12272ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2132893552]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39528->38.102.83.223:6443: read: connection reset by peer 12272ms (18:00:21.664) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2132893552]: [12.272109774s] [12.272109774s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.664267 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39528->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.684483 4758 reflector.go:561] object-"openstack"/"horizon-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39412->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.684631 4758 trace.go:236] Trace[162015835]: "Reflector ListAndWatch" name:object-"openstack"/"horizon-config-data" (22-Jan-2026 18:00:09.377) (total time: 12307ms): Jan 22 18:00:29 crc 
kubenswrapper[4758]: Trace[162015835]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39412->38.102.83.223:6443: read: connection reset by peer 12307ms (18:00:21.684) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[162015835]: [12.30718063s] [12.30718063s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.684676 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39412->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.707105 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39396->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.707240 4758 trace.go:236] Trace[1079181172]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage-rulefiles-0" (22-Jan-2026 18:00:09.373) (total time: 12333ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1079181172]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39396->38.102.83.223:6443: read: connection reset by peer 12333ms (18:00:21.707) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1079181172]: [12.333690283s] [12.333690283s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.707265 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39396->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.724235 4758 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39712->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.724320 
4758 trace.go:236] Trace[1114767647]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.419) (total time: 12305ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1114767647]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39712->38.102.83.223:6443: read: connection reset by peer 12305ms (18:00:21.724) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1114767647]: [12.305282899s] [12.305282899s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.724342 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39712->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.744582 4758 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40288->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.744734 4758 trace.go:236] Trace[1817331190]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"kube-rbac-proxy" (22-Jan-2026 18:00:09.528) (total time: 12215ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1817331190]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40288->38.102.83.223:6443: read: connection reset by peer 12215ms (18:00:21.744) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1817331190]: [12.215984664s] [12.215984664s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.744814 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40288->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.764056 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39736->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.764214 4758 trace.go:236] Trace[1218829610]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-operator-images" (22-Jan-2026 18:00:09.427) (total time: 12336ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1218829610]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39736->38.102.83.223:6443: read: connection reset by peer 12336ms (18:00:21.764) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1218829610]: [12.336182381s] [12.336182381s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.764255 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39736->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.784322 4758 reflector.go:561] object-"openstack"/"nova-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40248->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.784417 4758 trace.go:236] Trace[797350615]: "Reflector ListAndWatch" name:object-"openstack"/"nova-api-config-data" (22-Jan-2026 18:00:09.516) (total time: 12268ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[797350615]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40248->38.102.83.223:6443: read: connection reset by peer 12268ms (18:00:21.784) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[797350615]: [12.268152476s] [12.268152476s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.784439 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40248->38.102.83.223:6443: read: connection reset 
by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.804276 4758 reflector.go:561] object-"openstack"/"cinder-backup-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40212->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.804403 4758 trace.go:236] Trace[1153020882]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-backup-config-data" (22-Jan-2026 18:00:09.506) (total time: 12297ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1153020882]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40212->38.102.83.223:6443: read: connection reset by peer 12297ms (18:00:21.804) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1153020882]: [12.297862966s] [12.297862966s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.804439 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-backup-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40212->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.823925 4758 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39762->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.824041 4758 trace.go:236] Trace[369241364]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"config" (22-Jan-2026 18:00:09.434) (total time: 12389ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[369241364]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39762->38.102.83.223:6443: read: connection reset by peer 12389ms (18:00:21.823) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[369241364]: [12.38973528s] [12.38973528s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.824070 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:39762->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.844376 4758 reflector.go:561] object-"openstack"/"ovncontroller-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39776->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.844452 4758 trace.go:236] Trace[223190895]: "Reflector ListAndWatch" name:object-"openstack"/"ovncontroller-scripts" (22-Jan-2026 18:00:09.435) (total time: 12409ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[223190895]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39776->38.102.83.223:6443: read: connection reset by peer 12408ms (18:00:21.844) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[223190895]: [12.409056106s] [12.409056106s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.844471 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39776->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.864509 4758 reflector.go:561] object-"openshift-ingress"/"router-metrics-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39752->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.864644 4758 trace.go:236] Trace[1316322534]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"router-metrics-certs-default" (22-Jan-2026 18:00:09.428) (total time: 12436ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1316322534]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39752->38.102.83.223:6443: read: connection reset by peer 12436ms (18:00:21.864) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1316322534]: [12.436535806s] [12.436535806s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.864667 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-metrics-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39752->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.884483 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39432->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.884636 4758 trace.go:236] Trace[1717165587]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"trusted-ca-bundle" (22-Jan-2026 18:00:09.382) (total time: 12502ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1717165587]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39432->38.102.83.223:6443: read: connection reset by peer 12502ms (18:00:21.884) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1717165587]: [12.502535405s] [12.502535405s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.884672 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39432->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.904064 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40258->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.904146 4758 trace.go:236] Trace[1058530476]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"audit-1" (22-Jan-2026 18:00:09.519) (total time: 12384ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1058530476]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40258->38.102.83.223:6443: read: connection reset by peer 12384ms (18:00:21.904) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1058530476]: [12.384329112s] [12.384329112s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.904167 4758 
reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40258->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.924326 4758 reflector.go:561] object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p9vjx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-p9vjx&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39620->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.924429 4758 trace.go:236] Trace[561187766]: "Reflector ListAndWatch" name:object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p9vjx" (22-Jan-2026 18:00:09.407) (total time: 12516ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[561187766]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-p9vjx&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39620->38.102.83.223:6443: read: connection reset by peer 12516ms (18:00:21.924) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[561187766]: [12.516418562s] [12.516418562s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.924459 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"glance-operator-controller-manager-dockercfg-p9vjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-p9vjx&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39620->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.944177 4758 reflector.go:561] object-"metallb-system"/"controller-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39422->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.944319 4758 trace.go:236] Trace[1250809198]: "Reflector ListAndWatch" name:object-"metallb-system"/"controller-certs-secret" (22-Jan-2026 18:00:09.378) (total time: 12565ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1250809198]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.223:39422->38.102.83.223:6443: read: connection reset by peer 12565ms (18:00:21.944) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1250809198]: [12.565698357s] [12.565698357s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.944357 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39422->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.964307 4758 reflector.go:561] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39516->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.964440 4758 trace.go:236] Trace[299677152]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" (22-Jan-2026 18:00:09.392) (total time: 12572ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[299677152]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39516->38.102.83.223:6443: read: connection reset by peer 12572ms (18:00:21.964) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[299677152]: [12.572349597s] [12.572349597s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.964471 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-m4qtx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39516->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:21.984382 4758 reflector.go:561] object-"openstack"/"openstack-config-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39632->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:21.984539 4758 trace.go:236] Trace[275823081]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-config-secret" (22-Jan-2026 18:00:09.409) (total time: 12574ms): Jan 22 18:00:29 
crc kubenswrapper[4758]: Trace[275823081]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39632->38.102.83.223:6443: read: connection reset by peer 12574ms (18:00:21.984) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[275823081]: [12.574582619s] [12.574582619s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:21.984584 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39632->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.004262 4758 reflector.go:561] object-"openstack"/"nova-nova-dockercfg-r6mc9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-r6mc9&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39514->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.004353 4758 trace.go:236] Trace[25340622]: "Reflector ListAndWatch" name:object-"openstack"/"nova-nova-dockercfg-r6mc9" (22-Jan-2026 18:00:09.391) (total time: 12612ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[25340622]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-r6mc9&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39514->38.102.83.223:6443: read: connection reset by peer 12612ms (18:00:22.004) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[25340622]: [12.612333838s] [12.612333838s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.004375 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-nova-dockercfg-r6mc9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-r6mc9&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39514->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.024541 4758 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39424->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.024946 4758 trace.go:236] Trace[1450810017]: "Reflector ListAndWatch" 
name:object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.381) (total time: 12642ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1450810017]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39424->38.102.83.223:6443: read: connection reset by peer 12642ms (18:00:22.024) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1450810017]: [12.642984623s] [12.642984623s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.024989 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39424->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.044221 4758 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39504->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.044354 4758 trace.go:236] Trace[247959853]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" (22-Jan-2026 18:00:09.389) (total time: 12654ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[247959853]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39504->38.102.83.223:6443: read: connection reset by peer 12654ms (18:00:22.044) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[247959853]: [12.65463599s] [12.65463599s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.044388 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39504->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.064884 4758 reflector.go:561] object-"openstack"/"kube-state-metrics-tls-config": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkube-state-metrics-tls-config&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39698->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.065021 4758 trace.go:236] Trace[46154321]: "Reflector ListAndWatch" name:object-"openstack"/"kube-state-metrics-tls-config" (22-Jan-2026 18:00:09.418) (total time: 12645ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[46154321]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkube-state-metrics-tls-config&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39698->38.102.83.223:6443: read: connection reset by peer 12645ms (18:00:22.064) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[46154321]: [12.645988245s] [12.645988245s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.065054 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"kube-state-metrics-tls-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkube-state-metrics-tls-config&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39698->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.084649 4758 reflector.go:561] object-"openstack"/"galera-openstack-cell1-dockercfg-thg4w": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-thg4w&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39662->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.084777 4758 trace.go:236] Trace[1692173969]: "Reflector ListAndWatch" name:object-"openstack"/"galera-openstack-cell1-dockercfg-thg4w" (22-Jan-2026 18:00:09.411) (total time: 12672ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1692173969]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-thg4w&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39662->38.102.83.223:6443: read: connection reset by peer 12672ms (18:00:22.084) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1692173969]: [12.672760854s] [12.672760854s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.084800 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-cell1-dockercfg-thg4w\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-thg4w&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39662->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 
22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.104462 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39686->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.104604 4758 trace.go:236] Trace[827764112]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-service-ca" (22-Jan-2026 18:00:09.413) (total time: 12690ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[827764112]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39686->38.102.83.223:6443: read: connection reset by peer 12690ms (18:00:22.104) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[827764112]: [12.690766925s] [12.690766925s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.104636 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39686->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.125374 4758 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pdg6h": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-pdg6h&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39646->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.125491 4758 trace.go:236] Trace[901365935]: "Reflector ListAndWatch" name:object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pdg6h" (22-Jan-2026 18:00:09.411) (total time: 12714ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[901365935]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-pdg6h&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39646->38.102.83.223:6443: read: connection reset by peer 12714ms (18:00:22.125) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[901365935]: [12.714369339s] [12.714369339s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.125520 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-pdg6h\": Failed to watch 
*v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-pdg6h&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39646->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.145003 4758 reflector.go:561] object-"openshift-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39496->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.145195 4758 trace.go:236] Trace[799720251]: "Reflector ListAndWatch" name:object-"openshift-operators"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.389) (total time: 12755ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[799720251]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39496->38.102.83.223:6443: read: connection reset by peer 12755ms (18:00:22.144) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[799720251]: [12.755548871s] [12.755548871s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.145258 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39496->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.164667 4758 reflector.go:561] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s6gv4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-s6gv4&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39828->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.164770 4758 trace.go:236] Trace[490288195]: "Reflector ListAndWatch" name:object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s6gv4" (22-Jan-2026 18:00:09.443) (total time: 12720ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[490288195]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-s6gv4&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39828->38.102.83.223:6443: read: connection reset by peer 12720ms (18:00:22.164) 
Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[490288195]: [12.720788874s] [12.720788874s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.164793 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"neutron-operator-controller-manager-dockercfg-s6gv4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-s6gv4&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39828->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.181987 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" podUID="cdd1962b-fbf0-480c-b5e2-e28ee6988046" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.181995 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" podUID="cdd1962b-fbf0-480c-b5e2-e28ee6988046" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.184056 4758 reflector.go:561] object-"openshift-ingress"/"router-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39822->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.184139 4758 trace.go:236] Trace[75619946]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"router-certs-default" (22-Jan-2026 18:00:09.443) (total time: 12740ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[75619946]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39822->38.102.83.223:6443: read: connection reset by peer 12740ms (18:00:22.184) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[75619946]: [12.740268185s] [12.740268185s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.184158 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39822->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.204091 4758 reflector.go:561] 
object-"cert-manager"/"cert-manager-cainjector-dockercfg-x4h8f": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-x4h8f&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39216->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.204167 4758 trace.go:236] Trace[1842538566]: "Reflector ListAndWatch" name:object-"cert-manager"/"cert-manager-cainjector-dockercfg-x4h8f" (22-Jan-2026 18:00:09.344) (total time: 12859ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1842538566]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-x4h8f&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39216->38.102.83.223:6443: read: connection reset by peer 12859ms (18:00:22.204) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1842538566]: [12.859170446s] [12.859170446s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.204190 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-x4h8f\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-x4h8f&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39216->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.224416 4758 reflector.go:561] object-"openstack"/"cert-ovndbcluster-nb-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-nb-ovndbs&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39358->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.224542 4758 trace.go:236] Trace[926697034]: "Reflector ListAndWatch" name:object-"openstack"/"cert-ovndbcluster-nb-ovndbs" (22-Jan-2026 18:00:09.363) (total time: 12860ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[926697034]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-nb-ovndbs&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39358->38.102.83.223:6443: read: connection reset by peer 12860ms (18:00:22.224) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[926697034]: [12.860820621s] [12.860820621s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.224574 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovndbcluster-nb-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-nb-ovndbs&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:39358->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.244697 4758 reflector.go:561] object-"openshift-network-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39338->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.244849 4758 trace.go:236] Trace[683155320]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.360) (total time: 12884ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[683155320]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39338->38.102.83.223:6443: read: connection reset by peer 12884ms (18:00:22.244) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[683155320]: [12.884502496s] [12.884502496s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.244881 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39338->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.264497 4758 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39326->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.264581 4758 trace.go:236] Trace[2075277287]: "Reflector ListAndWatch" name:object-"openshift-multus"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.360) (total time: 12904ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2075277287]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39326->38.102.83.223:6443: read: connection reset by peer 12904ms (18:00:22.264) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2075277287]: [12.904238044s] [12.904238044s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.264602 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39326->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.284331 4758 reflector.go:561] object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39324->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.284433 4758 trace.go:236] Trace[1421326609]: "Reflector ListAndWatch" name:object-"openshift-network-node-identity"/"ovnkube-identity-cm" (22-Jan-2026 18:00:09.360) (total time: 12924ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1421326609]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39324->38.102.83.223:6443: read: connection reset by peer 12924ms (18:00:22.284) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1421326609]: [12.924104516s] [12.924104516s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.284456 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39324->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.304300 4758 request.go:700] Waited for 7.14391398s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=84357": read tcp 38.102.83.223:39228->38.102.83.223:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=84357 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.304827 4758 reflector.go:561] object-"openstack"/"openstack-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39228->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.305031 4758 trace.go:236] Trace[366109315]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-config" (22-Jan-2026 18:00:09.346) (total time: 12958ms): Jan 22 18:00:29 crc 
kubenswrapper[4758]: Trace[366109315]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39228->38.102.83.223:6443: read: connection reset by peer 12958ms (18:00:22.304) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[366109315]: [12.958958135s] [12.958958135s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.305051 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39228->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.324756 4758 reflector.go:561] object-"openstack"/"watcher-watcher-dockercfg-bvchw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-watcher-dockercfg-bvchw&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39438->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.324827 4758 trace.go:236] Trace[787764750]: "Reflector ListAndWatch" name:object-"openstack"/"watcher-watcher-dockercfg-bvchw" (22-Jan-2026 18:00:09.384) (total time: 12940ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[787764750]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-watcher-dockercfg-bvchw&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39438->38.102.83.223:6443: read: connection reset by peer 12940ms (18:00:22.324) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[787764750]: [12.940442781s] [12.940442781s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.324845 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-watcher-dockercfg-bvchw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-watcher-dockercfg-bvchw&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39438->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.344649 4758 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tzrkw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-tzrkw&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40080->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.344834 4758 trace.go:236] Trace[633952]: "Reflector ListAndWatch" 
name:object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tzrkw" (22-Jan-2026 18:00:09.490) (total time: 12854ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[633952]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-tzrkw&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40080->38.102.83.223:6443: read: connection reset by peer 12853ms (18:00:22.344) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[633952]: [12.854073588s] [12.854073588s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.344867 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-nb-dockercfg-tzrkw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-tzrkw&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40080->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.364102 4758 reflector.go:561] object-"openstack"/"barbican-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39266->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.364175 4758 trace.go:236] Trace[305543110]: "Reflector ListAndWatch" name:object-"openstack"/"barbican-api-config-data" (22-Jan-2026 18:00:09.349) (total time: 13014ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[305543110]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39266->38.102.83.223:6443: read: connection reset by peer 13014ms (18:00:22.364) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[305543110]: [13.014644165s] [13.014644165s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.364192 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39266->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.383894 4758 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:39308->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.383976 4758 trace.go:236] Trace[108985338]: "Reflector ListAndWatch" name:object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" (22-Jan-2026 18:00:09.360) (total time: 13023ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[108985338]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39308->38.102.83.223:6443: read: connection reset by peer 13023ms (18:00:22.383) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[108985338]: [13.02369022s] [13.02369022s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.383996 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39308->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.403874 4758 reflector.go:561] object-"openstack-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39304->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.403950 4758 trace.go:236] Trace[291820523]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.356) (total time: 13047ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[291820523]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39304->38.102.83.223:6443: read: connection reset by peer 13047ms (18:00:22.403) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[291820523]: [13.04790084s] [13.04790084s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.403969 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39304->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.424197 4758 reflector.go:561] 
object-"openstack"/"rabbitmq-notifications-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-plugins-conf&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39278->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.424306 4758 trace.go:236] Trace[1654780502]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-notifications-plugins-conf" (22-Jan-2026 18:00:09.349) (total time: 13074ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1654780502]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-plugins-conf&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39278->38.102.83.223:6443: read: connection reset by peer 13074ms (18:00:22.424) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1654780502]: [13.074712481s] [13.074712481s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.424343 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-plugins-conf&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39278->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.444870 4758 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39252->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.444969 4758 trace.go:236] Trace[91366811]: "Reflector ListAndWatch" name:object-"openshift-config-operator"/"config-operator-serving-cert" (22-Jan-2026 18:00:09.348) (total time: 13096ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[91366811]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39252->38.102.83.223:6443: read: connection reset by peer 13096ms (18:00:22.444) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[91366811]: [13.096561787s] [13.096561787s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.444993 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=84252\": dial tcp 
38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39252->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.464035 4758 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40276->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.464158 4758 trace.go:236] Trace[478278936]: "Reflector ListAndWatch" name:object-"openshift-service-ca"/"signing-key" (22-Jan-2026 18:00:09.522) (total time: 12942ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[478278936]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40276->38.102.83.223:6443: read: connection reset by peer 12942ms (18:00:22.464) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[478278936]: [12.942113727s] [12.942113727s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.464191 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40276->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.484535 4758 reflector.go:561] object-"openstack"/"ovncontroller-metrics-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40240->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.484720 4758 trace.go:236] Trace[1640330208]: "Reflector ListAndWatch" name:object-"openstack"/"ovncontroller-metrics-config" (22-Jan-2026 18:00:09.515) (total time: 12969ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1640330208]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40240->38.102.83.223:6443: read: connection reset by peer 12969ms (18:00:22.484) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1640330208]: [12.969493663s] [12.969493663s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.484795 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-metrics-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40240->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.504316 4758 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40214->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.504465 4758 trace.go:236] Trace[1222833913]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"image-registry-operator-tls" (22-Jan-2026 18:00:09.508) (total time: 12996ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1222833913]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40214->38.102.83.223:6443: read: connection reset by peer 12996ms (18:00:22.504) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1222833913]: [12.996287393s] [12.996287393s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.504502 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40214->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.523869 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40060->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.523982 4758 trace.go:236] Trace[1485491874]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-plugins-conf" (22-Jan-2026 18:00:09.487) (total time: 13036ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1485491874]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40060->38.102.83.223:6443: read: connection reset by peer 13036ms (18:00:22.523) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1485491874]: [13.036722866s] [13.036722866s] END Jan 22 18:00:29 crc 
kubenswrapper[4758]: E0122 18:00:22.524012 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40060->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.544265 4758 reflector.go:561] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39480->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.544381 4758 trace.go:236] Trace[899701621]: "Reflector ListAndWatch" name:object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" (22-Jan-2026 18:00:09.386) (total time: 13158ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[899701621]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39480->38.102.83.223:6443: read: connection reset by peer 13158ms (18:00:22.544) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[899701621]: [13.158287309s] [13.158287309s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.544407 4758 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-qd74k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39480->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.563931 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40052->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.564052 4758 trace.go:236] Trace[1836579412]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-client" (22-Jan-2026 18:00:09.486) (total time: 13077ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1836579412]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused 
- error from a previous attempt: read tcp 38.102.83.223:40052->38.102.83.223:6443: read: connection reset by peer 13077ms (18:00:22.563) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1836579412]: [13.077890497s] [13.077890497s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.564092 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40052->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.583774 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-server-conf&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40076->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.583907 4758 trace.go:236] Trace[1253755209]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-notifications-server-conf" (22-Jan-2026 18:00:09.490) (total time: 13093ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1253755209]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-server-conf&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40076->38.102.83.223:6443: read: connection reset by peer 13093ms (18:00:22.583) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1253755209]: [13.093207915s] [13.093207915s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.583934 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-server-conf&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40076->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.604625 4758 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40046->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.604727 4758 trace.go:236] Trace[1176956648]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"serving-cert" (22-Jan-2026 18:00:09.486) (total time: 13118ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1176956648]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40046->38.102.83.223:6443: read: connection reset by peer 13118ms (18:00:22.604) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1176956648]: [13.118606517s] [13.118606517s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.604765 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40046->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.624674 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39166->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.624725 4758 trace.go:236] Trace[2034621839]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" (22-Jan-2026 18:00:09.329) (total time: 13295ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2034621839]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39166->38.102.83.223:6443: read: connection reset by peer 13295ms (18:00:22.624) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2034621839]: [13.295392346s] [13.295392346s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.624756 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-98p87\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39166->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.643959 4758 reflector.go:561] object-"openstack"/"rabbitmq-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39202->38.102.83.223:6443: read: connection 
reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.644090 4758 trace.go:236] Trace[392264078]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-plugins-conf" (22-Jan-2026 18:00:09.341) (total time: 13302ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[392264078]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39202->38.102.83.223:6443: read: connection reset by peer 13302ms (18:00:22.643) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[392264078]: [13.302795699s] [13.302795699s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.644121 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39202->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.664089 4758 reflector.go:561] object-"openstack"/"cert-ovn-metrics": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovn-metrics&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39926->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.664183 4758 trace.go:236] Trace[445818062]: "Reflector ListAndWatch" name:object-"openstack"/"cert-ovn-metrics" (22-Jan-2026 18:00:09.464) (total time: 13199ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[445818062]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovn-metrics&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39926->38.102.83.223:6443: read: connection reset by peer 13199ms (18:00:22.664) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[445818062]: [13.199477141s] [13.199477141s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.664205 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovn-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovn-metrics&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39926->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.684117 4758 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.223:40174->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.684187 4758 trace.go:236] Trace[988547303]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"machine-api-operator-images" (22-Jan-2026 18:00:09.500) (total time: 13183ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[988547303]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40174->38.102.83.223:6443: read: connection reset by peer 13183ms (18:00:22.684) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[988547303]: [13.183563128s] [13.183563128s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.684207 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40174->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.704014 4758 reflector.go:561] object-"openstack"/"openstack-cell1-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39200->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.704061 4758 trace.go:236] Trace[2111949392]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-cell1-config-data" (22-Jan-2026 18:00:09.339) (total time: 13364ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2111949392]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39200->38.102.83.223:6443: read: connection reset by peer 13364ms (18:00:22.704) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2111949392]: [13.364951672s] [13.364951672s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.704073 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39200->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.723843 4758 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39292->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.723914 4758 trace.go:236] Trace[1401098065]: "Reflector ListAndWatch" name:object-"openshift-network-console"/"networking-console-plugin" (22-Jan-2026 18:00:09.353) (total time: 13370ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1401098065]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39292->38.102.83.223:6443: read: connection reset by peer 13370ms (18:00:22.723) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1401098065]: [13.370164995s] [13.370164995s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.723929 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39292->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.744663 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40158->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.744756 4758 trace.go:236] Trace[699498100]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"etcd-serving-ca" (22-Jan-2026 18:00:09.500) (total time: 13244ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[699498100]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40158->38.102.83.223:6443: read: connection reset by peer 13244ms (18:00:22.744) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[699498100]: [13.24415739s] [13.24415739s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.744776 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40158->38.102.83.223:6443: read: 
connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.763918 4758 reflector.go:561] object-"openstack"/"cert-neutron-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-ovndbs&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39190->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.764016 4758 trace.go:236] Trace[1185459768]: "Reflector ListAndWatch" name:object-"openstack"/"cert-neutron-ovndbs" (22-Jan-2026 18:00:09.338) (total time: 13424ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1185459768]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-ovndbs&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39190->38.102.83.223:6443: read: connection reset by peer 13424ms (18:00:22.763) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1185459768]: [13.424990409s] [13.424990409s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.764037 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-neutron-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-ovndbs&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39190->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.784641 4758 reflector.go:561] object-"openstack"/"horizon-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40142->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.784734 4758 trace.go:236] Trace[1259093862]: "Reflector ListAndWatch" name:object-"openstack"/"horizon-scripts" (22-Jan-2026 18:00:09.498) (total time: 13286ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1259093862]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40142->38.102.83.223:6443: read: connection reset by peer 13286ms (18:00:22.784) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1259093862]: [13.286331389s] [13.286331389s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.784774 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40142->38.102.83.223:6443: read: 
connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.804533 4758 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39182->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.804608 4758 trace.go:236] Trace[154762820]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" (22-Jan-2026 18:00:09.332) (total time: 13471ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[154762820]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39182->38.102.83.223:6443: read: connection reset by peer 13471ms (18:00:22.804) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[154762820]: [13.471892178s] [13.471892178s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.804626 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39182->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.824522 4758 reflector.go:561] object-"openstack"/"cert-ovncontroller-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovncontroller-ovndbs&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39172->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.824588 4758 trace.go:236] Trace[577177173]: "Reflector ListAndWatch" name:object-"openstack"/"cert-ovncontroller-ovndbs" (22-Jan-2026 18:00:09.331) (total time: 13493ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[577177173]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovncontroller-ovndbs&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39172->38.102.83.223:6443: read: connection reset by peer 13492ms (18:00:22.824) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[577177173]: [13.493005463s] [13.493005463s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.824604 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovncontroller-ovndbs\": Failed to watch *v1.Secret: failed to list 
*v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovncontroller-ovndbs&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39172->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.844408 4758 reflector.go:561] object-"openstack"/"openstack-edpm-ipam": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-edpm-ipam&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40130->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.844454 4758 trace.go:236] Trace[304346751]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-edpm-ipam" (22-Jan-2026 18:00:09.496) (total time: 13348ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[304346751]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-edpm-ipam&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40130->38.102.83.223:6443: read: connection reset by peer 13348ms (18:00:22.844) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[304346751]: [13.348301029s] [13.348301029s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.844466 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-edpm-ipam\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-edpm-ipam&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40130->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.864629 4758 reflector.go:561] object-"openshift-ingress"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40096->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.864688 4758 trace.go:236] Trace[706494146]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.491) (total time: 13372ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[706494146]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40096->38.102.83.223:6443: read: connection reset by peer 13372ms (18:00:22.864) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[706494146]: [13.372831228s] [13.372831228s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.864701 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ingress\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40096->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.883635 4758 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40086->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.883703 4758 trace.go:236] Trace[1580195260]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" (22-Jan-2026 18:00:09.490) (total time: 13393ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1580195260]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40086->38.102.83.223:6443: read: connection reset by peer 13393ms (18:00:22.883) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1580195260]: [13.39308987s] [13.39308987s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.883719 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40086->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.904477 4758 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=84567": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40108->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.904544 4758 trace.go:236] Trace[202175416]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" (22-Jan-2026 18:00:09.495) (total time: 13409ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[202175416]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=84567": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40108->38.102.83.223:6443: read: connection reset by peer 13409ms (18:00:22.904) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[202175416]: [13.409498407s] [13.409498407s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.904561 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-k9rxt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=84567\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40108->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.924389 4758 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39488->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.924453 4758 trace.go:236] Trace[980269126]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.385) (total time: 13539ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[980269126]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39488->38.102.83.223:6443: read: connection reset by peer 13539ms (18:00:22.924) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[980269126]: [13.539287135s] [13.539287135s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.924471 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39488->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.944231 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=84133": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39492->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.944341 4758 trace.go:236] Trace[431253403]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 18:00:09.387) (total time: 13557ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[431253403]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=84133": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39492->38.102.83.223:6443: read: connection reset by peer 13556ms (18:00:22.944) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[431253403]: [13.557028499s] [13.557028499s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.944371 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=84133\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39492->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.964622 4758 reflector.go:561] object-"openstack"/"placement-placement-dockercfg-n4qvk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-n4qvk&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39902->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.964803 4758 trace.go:236] Trace[494079030]: "Reflector ListAndWatch" name:object-"openstack"/"placement-placement-dockercfg-n4qvk" (22-Jan-2026 18:00:09.462) (total time: 13502ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[494079030]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-n4qvk&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39902->38.102.83.223:6443: read: connection reset by peer 13502ms (18:00:22.964) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[494079030]: [13.502555673s] [13.502555673s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.964857 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-placement-dockercfg-n4qvk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-n4qvk&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39902->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:22.985043 4758 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39792->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:22.985211 4758 trace.go:236] 
Trace[1484747387]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.441) (total time: 13543ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1484747387]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39792->38.102.83.223:6443: read: connection reset by peer 13543ms (18:00:22.985) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1484747387]: [13.543578131s] [13.543578131s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:22.985244 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39792->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.006580 4758 reflector.go:561] object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39910->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.006703 4758 trace.go:236] Trace[1640304220]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-oauth-config" (22-Jan-2026 18:00:09.462) (total time: 13544ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1640304220]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39910->38.102.83.223:6443: read: connection reset by peer 13544ms (18:00:23.006) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1640304220]: [13.544459875s] [13.544459875s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.006734 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39910->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.024610 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=84625": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40220->38.102.83.223:6443: read: 
connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.024709 4758 trace.go:236] Trace[1029857874]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 18:00:09.515) (total time: 13509ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1029857874]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=84625": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40220->38.102.83.223:6443: read: connection reset by peer 13509ms (18:00:23.024) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1029857874]: [13.509541613s] [13.509541613s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.024732 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=84625\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40220->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.045282 4758 reflector.go:561] object-"openstack"/"rabbitmq-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40210->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.045444 4758 trace.go:236] Trace[1026533403]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-default-user" (22-Jan-2026 18:00:09.504) (total time: 13541ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1026533403]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40210->38.102.83.223:6443: read: connection reset by peer 13541ms (18:00:23.045) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1026533403]: [13.541272558s] [13.541272558s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.045484 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40210->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.064629 4758 reflector.go:561] object-"openstack"/"nova-metadata-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40290->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc 
kubenswrapper[4758]: I0122 18:00:23.064815 4758 trace.go:236] Trace[964869296]: "Reflector ListAndWatch" name:object-"openstack"/"nova-metadata-config-data" (22-Jan-2026 18:00:09.528) (total time: 13535ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[964869296]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40290->38.102.83.223:6443: read: connection reset by peer 13535ms (18:00:23.064) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[964869296]: [13.535957494s] [13.535957494s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.064850 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-metadata-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40290->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.083963 4758 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39802->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.084094 4758 trace.go:236] Trace[679620511]: "Reflector ListAndWatch" name:object-"openshift-config-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.443) (total time: 13640ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[679620511]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39802->38.102.83.223:6443: read: connection reset by peer 13640ms (18:00:23.083) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[679620511]: [13.640181055s] [13.640181055s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.084129 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39802->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.104403 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-config-data&resourceVersion=84439": dial tcp 
38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39864->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.104513 4758 trace.go:236] Trace[271020238]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-config-data" (22-Jan-2026 18:00:09.446) (total time: 13658ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[271020238]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-config-data&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39864->38.102.83.223:6443: read: connection reset by peer 13657ms (18:00:23.104) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[271020238]: [13.658048571s] [13.658048571s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.104538 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-config-data&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39864->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.124621 4758 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39990->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.124729 4758 trace.go:236] Trace[1810079229]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.478) (total time: 13646ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1810079229]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39990->38.102.83.223:6443: read: connection reset by peer 13645ms (18:00:23.124) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1810079229]: [13.646033444s] [13.646033444s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.124772 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39990->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.144028 4758 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list 
*v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39854->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.144113 4758 trace.go:236] Trace[1443296812]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"serving-cert" (22-Jan-2026 18:00:09.446) (total time: 13697ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1443296812]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39854->38.102.83.223:6443: read: connection reset by peer 13697ms (18:00:23.144) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1443296812]: [13.697684092s] [13.697684092s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.144136 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39854->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.164326 4758 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39840->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.164462 4758 trace.go:236] Trace[1268370183]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.443) (total time: 13720ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1268370183]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39840->38.102.83.223:6443: read: connection reset by peer 13720ms (18:00:23.164) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1268370183]: [13.720455472s] [13.720455472s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.164494 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39840->38.102.83.223:6443: read: connection reset by peer" 
logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.185194 4758 reflector.go:561] object-"openshift-ingress"/"router-stats-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39886->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.185317 4758 trace.go:236] Trace[979747385]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"router-stats-default" (22-Jan-2026 18:00:09.460) (total time: 13725ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[979747385]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39886->38.102.83.223:6443: read: connection reset by peer 13725ms (18:00:23.185) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[979747385]: [13.725261654s] [13.725261654s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.185350 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-stats-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39886->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.204279 4758 reflector.go:561] object-"metallb-system"/"metallb-webhook-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40036->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.204355 4758 trace.go:236] Trace[1995171781]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-webhook-cert" (22-Jan-2026 18:00:09.484) (total time: 13719ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1995171781]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40036->38.102.83.223:6443: read: connection reset by peer 13719ms (18:00:23.204) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1995171781]: [13.719365834s] [13.719365834s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.204372 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous 
attempt: read tcp 38.102.83.223:40036->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.224590 4758 reflector.go:561] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39882->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.224689 4758 trace.go:236] Trace[2002868608]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.451) (total time: 13772ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2002868608]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39882->38.102.83.223:6443: read: connection reset by peer 13772ms (18:00:23.224) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[2002868608]: [13.772878392s] [13.772878392s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.224710 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39882->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.244493 4758 reflector.go:561] object-"openshift-console"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39724->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.244601 4758 trace.go:236] Trace[1154317136]: "Reflector ListAndWatch" name:object-"openshift-console"/"trusted-ca-bundle" (22-Jan-2026 18:00:09.420) (total time: 13824ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1154317136]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39724->38.102.83.223:6443: read: connection reset by peer 13824ms (18:00:23.244) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1154317136]: [13.824450377s] [13.824450377s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.244627 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39724->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.264476 4758 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40032->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.264551 4758 trace.go:236] Trace[79549574]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"metrics-tls" (22-Jan-2026 18:00:09.484) (total time: 13779ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[79549574]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40032->38.102.83.223:6443: read: connection reset by peer 13779ms (18:00:23.264) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[79549574]: [13.779585465s] [13.779585465s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.264567 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40032->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.284569 4758 reflector.go:561] object-"openshift-console"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39954->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.284682 4758 trace.go:236] Trace[1704337336]: "Reflector ListAndWatch" name:object-"openshift-console"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.469) (total time: 13815ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1704337336]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39954->38.102.83.223:6443: read: connection reset by peer 13815ms (18:00:23.284) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1704337336]: [13.815130193s] [13.815130193s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.284710 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-console\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39954->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.303928 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=84609": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40012->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.304003 4758 trace.go:236] Trace[1148365874]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 18:00:09.483) (total time: 13820ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1148365874]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=84609": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40012->38.102.83.223:6443: read: connection reset by peer 13820ms (18:00:23.303) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1148365874]: [13.820155501s] [13.820155501s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.304023 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=84609\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40012->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.323687 4758 request.go:700] Waited for 8.163174952s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=84343": read tcp 38.102.83.223:39974->38.102.83.223:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=84343 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.324464 4758 reflector.go:561] object-"openshift-machine-config-operator"/"node-bootstrapper-token": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39974->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.324608 4758 trace.go:236] Trace[1594473468]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"node-bootstrapper-token" (22-Jan-2026 18:00:09.476) (total time: 13848ms): Jan 22 18:00:29 crc kubenswrapper[4758]: 
Trace[1594473468]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39974->38.102.83.223:6443: read: connection reset by peer 13847ms (18:00:23.324) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1594473468]: [13.848078202s] [13.848078202s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.324643 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39974->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.344248 4758 reflector.go:561] object-"openstack"/"swift-conf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-conf&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39966->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.344371 4758 trace.go:236] Trace[944008926]: "Reflector ListAndWatch" name:object-"openstack"/"swift-conf" (22-Jan-2026 18:00:09.475) (total time: 13868ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[944008926]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-conf&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39966->38.102.83.223:6443: read: connection reset by peer 13868ms (18:00:23.344) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[944008926]: [13.868971642s] [13.868971642s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.344403 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-conf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-conf&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39966->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.364717 4758 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39876->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.364850 4758 trace.go:236] Trace[664326200]: "Reflector ListAndWatch" 
name:object-"openshift-network-node-identity"/"openshift-service-ca.crt" (22-Jan-2026 18:00:09.448) (total time: 13916ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[664326200]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39876->38.102.83.223:6443: read: connection reset by peer 13916ms (18:00:23.364) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[664326200]: [13.91626261s] [13.91626261s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.364883 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39876->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.384851 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40000->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.384949 4758 trace.go:236] Trace[1833395522]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"serving-cert" (22-Jan-2026 18:00:09.480) (total time: 13904ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1833395522]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40000->38.102.83.223:6443: read: connection reset by peer 13903ms (18:00:23.384) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1833395522]: [13.904033557s] [13.904033557s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.384975 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40000->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.403637 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-tls-assets-0": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: 
read tcp 38.102.83.223:39958->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.403697 4758 trace.go:236] Trace[178935595]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage-tls-assets-0" (22-Jan-2026 18:00:09.475) (total time: 13928ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[178935595]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39958->38.102.83.223:6443: read: connection reset by peer 13928ms (18:00:23.403) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[178935595]: [13.928339099s] [13.928339099s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.403713 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-tls-assets-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39958->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.423853 4758 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39988->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.423918 4758 trace.go:236] Trace[427841504]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"console-operator-config" (22-Jan-2026 18:00:09.477) (total time: 13946ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[427841504]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39988->38.102.83.223:6443: read: connection reset by peer 13946ms (18:00:23.423) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[427841504]: [13.946332999s] [13.946332999s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.423935 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39988->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.443766 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list 
*v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39370->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.443847 4758 trace.go:236] Trace[1438812971]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" (22-Jan-2026 18:00:09.368) (total time: 14075ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1438812971]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39370->38.102.83.223:6443: read: connection reset by peer 14075ms (18:00:23.443) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1438812971]: [14.075329067s] [14.075329067s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.443866 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39370->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.464034 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-config-data&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39940->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.464091 4758 trace.go:236] Trace[1906719035]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-notifications-config-data" (22-Jan-2026 18:00:09.466) (total time: 13997ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1906719035]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-config-data&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39940->38.102.83.223:6443: read: connection reset by peer 13997ms (18:00:23.464) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1906719035]: [13.997180206s] [13.997180206s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.464103 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-config-data&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection 
refused - error from a previous attempt: read tcp 38.102.83.223:39940->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.484726 4758 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40300->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.484827 4758 trace.go:236] Trace[1178073073]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:09.529) (total time: 13954ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1178073073]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40300->38.102.83.223:6443: read: connection reset by peer 13954ms (18:00:23.484) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1178073073]: [13.954924434s] [13.954924434s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.484849 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40300->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.504632 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40194->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.504687 4758 trace.go:236] Trace[668750134]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" (22-Jan-2026 18:00:09.503) (total time: 14000ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[668750134]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40194->38.102.83.223:6443: read: connection reset by peer 14000ms (18:00:23.504) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[668750134]: [14.000721562s] [14.000721562s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.504700 4758 
reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-5xfcg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40194->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.524234 4758 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40188->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.524283 4758 trace.go:236] Trace[721555248]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" (22-Jan-2026 18:00:09.501) (total time: 14022ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[721555248]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40188->38.102.83.223:6443: read: connection reset by peer 14022ms (18:00:23.524) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[721555248]: [14.022561388s] [14.022561388s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.524296 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-5nsgg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:40188->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.544276 4758 reflector.go:561] object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39448->38.102.83.223:6443: read: connection reset by peer Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.544375 4758 trace.go:236] Trace[1531957367]: "Reflector ListAndWatch" name:object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx" (22-Jan-2026 18:00:09.384) (total time: 14159ms): Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1531957367]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39448->38.102.83.223:6443: read: connection reset by peer 14159ms (18:00:23.544) Jan 22 18:00:29 crc kubenswrapper[4758]: Trace[1531957367]: [14.159458879s] [14.159458879s] END Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.544403 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"rabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.223:39448->38.102.83.223:6443: read: connection reset by peer" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.564000 4758 reflector.go:561] object-"openstack-operators"/"webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.564033 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.583658 4758 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=84645": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.583695 4758 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=84645\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.604359 4758 reflector.go:561] object-"openstack"/"glance-glance-dockercfg-th7td": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-th7td&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.604385 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-glance-dockercfg-th7td\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-th7td&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc 
kubenswrapper[4758]: W0122 18:00:23.626811 4758 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.626895 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.645873 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.645934 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.664365 4758 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.664432 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.683675 4758 status_manager.go:851] "Failed to get status for pod" podUID="e7fdd2cd-e517-46b5-acb3-22b59b7f132f" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-tlt96\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.704224 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.704277 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2bh8d\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.725046 4758 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/persistence-rabbitmq-server-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-server-0\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openstack/rabbitmq-server-0" volumeName="persistence" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.744421 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.744487 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.764579 4758 reflector.go:561] object-"openstack"/"nova-cell1-novncproxy-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.764644 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-novncproxy-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.783639 4758 reflector.go:561] object-"metallb-system"/"controller-dockercfg-qdnhd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-qdnhd&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc 
kubenswrapper[4758]: E0122 18:00:23.783681 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-qdnhd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-qdnhd&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.804713 4758 reflector.go:561] object-"openstack"/"dnsmasq-dns-dockercfg-w2txv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-w2txv&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.804769 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dnsmasq-dns-dockercfg-w2txv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-w2txv&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.808369 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.808563 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.823921 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.823958 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.832565 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:23.832621 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 
192.168.126.11:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.844772 4758 reflector.go:561] object-"openstack"/"cinder-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.844855 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.864509 4758 reflector.go:561] object-"openstack"/"ovndbcluster-nb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.864555 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.884482 4758 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.884537 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.904213 4758 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.904258 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.924809 4758 
reflector.go:561] object-"openstack"/"metric-storage-prometheus-dockercfg-4ftsd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4ftsd&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.924881 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"metric-storage-prometheus-dockercfg-4ftsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4ftsd&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.943913 4758 reflector.go:561] object-"openstack"/"cinder-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.943964 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.964482 4758 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4q6rk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-4q6rk&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.964529 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-manager-dockercfg-4q6rk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-4q6rk&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:23.984426 4758 reflector.go:561] object-"openstack"/"cert-keystone-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-public-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:23.984498 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-keystone-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-public-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.004071 4758 reflector.go:561] 
object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.004122 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.023706 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.023775 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.044251 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.044306 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-r9srn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.064401 4758 reflector.go:561] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dbtnp": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-dbtnp&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.064459 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"telemetry-operator-controller-manager-dockercfg-dbtnp\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-dbtnp&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" 
Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.084734 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.084870 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.104436 4758 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.104506 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.124834 4758 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.124899 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.143806 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.143870 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: 
connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.165104 4758 reflector.go:561] object-"openstack"/"watcher-applier-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-applier-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.165156 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-applier-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-applier-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.184075 4758 reflector.go:561] object-"openstack"/"nova-cell1-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.184125 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.204496 4758 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.204550 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.223991 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.224254 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-vw8fw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=84343\": dial 
tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.244401 4758 reflector.go:561] object-"openshift-image-registry"/"installation-pull-secrets": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.244482 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"installation-pull-secrets\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.263903 4758 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.264025 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.284124 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.284172 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.304256 4758 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.304288 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection 
refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:24.324058 4758 request.go:700] Waited for 7.009152695s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=84313 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.324471 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.324512 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.343944 4758 reflector.go:561] object-"openstack"/"cert-nova-novncproxy-cell1-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-public-svc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.343993 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-novncproxy-cell1-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-public-svc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.363831 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.363881 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.384650 4758 reflector.go:561] object-"openstack"/"swift-swift-dockercfg-xgjlh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-swift-dockercfg-xgjlh&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.384701 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-swift-dockercfg-xgjlh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-swift-dockercfg-xgjlh&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.405783 4758 reflector.go:561] object-"cert-manager"/"cert-manager-dockercfg-qcl9m": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-qcl9m&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.405839 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-dockercfg-qcl9m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-qcl9m&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.424221 4758 reflector.go:561] object-"openstack"/"openstack-cell1-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.424283 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.444326 4758 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.444377 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.464249 4758 reflector.go:561] object-"openstack"/"cert-nova-metadata-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-metadata-internal-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.464301 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-metadata-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-metadata-internal-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" 
logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.484046 4758 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.484105 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-qt55r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.504367 4758 reflector.go:561] object-"openstack-operators"/"metrics-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.504408 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"metrics-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.524349 4758 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.524387 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.543960 4758 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.544010 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.563824 4758 reflector.go:561] object-"openstack"/"cert-cinder-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-public-svc&resourceVersion=84567": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.563857 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-cinder-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-public-svc&resourceVersion=84567\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.584362 4758 reflector.go:561] object-"openstack"/"cert-ovndbcluster-sb-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-sb-ovndbs&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.584420 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovndbcluster-sb-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-sb-ovndbs&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.604702 4758 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.604773 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.624142 4758 reflector.go:561] object-"openstack"/"tempest-tests-tempest-custom-data-s0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dtempest-tests-tempest-custom-data-s0&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.624194 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"tempest-tests-tempest-custom-data-s0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dtempest-tests-tempest-custom-data-s0&resourceVersion=84357\": dial tcp 
38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.644361 4758 reflector.go:561] object-"openstack"/"barbican-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.644424 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.667818 4758 reflector.go:561] object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-vencrypt&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.667898 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-novncproxy-cell1-vencrypt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-vencrypt&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.684193 4758 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.684256 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.707349 4758 reflector.go:561] object-"openshift-ingress-canary"/"canary-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.707436 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"canary-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 
18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.723781 4758 reflector.go:561] object-"openstack"/"combined-ca-bundle": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.723826 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"combined-ca-bundle\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.743787 4758 reflector.go:561] object-"openstack"/"swift-ring-files": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-ring-files&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.743834 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-ring-files\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-ring-files&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.764312 4758 reflector.go:561] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-g7xdx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-g7xdx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.764377 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"octavia-operator-controller-manager-dockercfg-g7xdx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-g7xdx&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.783840 4758 reflector.go:561] object-"metallb-system"/"frr-k8s-daemon-dockercfg-s75rc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-s75rc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.783901 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-daemon-dockercfg-s75rc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-s75rc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.803956 4758 reflector.go:561] 
object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.804058 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.823918 4758 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.824015 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.843789 4758 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.843832 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.864088 4758 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.864166 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:24.870092 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" 
podUID="8afd29cc-2dab-460e-ad9d-f17690c15f41" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.52:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.883643 4758 reflector.go:561] object-"openstack"/"swift-proxy-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-proxy-config-data&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.883691 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-proxy-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-proxy-config-data&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.903926 4758 reflector.go:561] object-"openstack"/"cert-watcher-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-public-svc&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.903987 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-watcher-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-public-svc&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.925028 4758 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.925083 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:24.937019 4758 patch_prober.go:28] interesting pod/controller-manager-5b46f89db7-56qr2 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:24.937069 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" podUID="11e5039c-273e-4208-9295-329a27e6d22b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:24.937115 4758 patch_prober.go:28] interesting pod/controller-manager-5b46f89db7-56qr2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:24.937128 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b46f89db7-56qr2" podUID="11e5039c-273e-4208-9295-329a27e6d22b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.943835 4758 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.943893 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.963924 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.963980 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:24.985655 4758 reflector.go:561] object-"openstack"/"cert-rabbitmq-notifications-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-notifications-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:24.985700 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-rabbitmq-notifications-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-notifications-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" 
logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.003902 4758 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.003932 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.024884 4758 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-brw4q": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-brw4q&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.024935 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-brw4q\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-brw4q&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.044778 4758 reflector.go:561] object-"openshift-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.044827 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.063639 4758 reflector.go:561] object-"openstack"/"cert-rabbitmq-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-cell1-svc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.063667 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-rabbitmq-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-cell1-svc&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" 
Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.084456 4758 reflector.go:561] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2fs5z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-2fs5z&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.084507 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ovn-operator-controller-manager-dockercfg-2fs5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-2fs5z&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.104099 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.104150 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.124550 4758 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4jql8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-4jql8&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.124609 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-4jql8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-4jql8&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.144188 4758 reflector.go:561] object-"openstack"/"rabbitmq-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.144253 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=84220\": dial tcp 
38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.180696 4758 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.180772 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.183789 4758 reflector.go:561] object-"openshift-operators"/"perses-operator-dockercfg-c658k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-c658k&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.183847 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"perses-operator-dockercfg-c658k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-c658k&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.203880 4758 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.203929 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.224486 4758 reflector.go:561] object-"openshift-image-registry"/"image-registry-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.224538 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: 
W0122 18:00:25.244695 4758 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.244769 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.264159 4758 reflector.go:561] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nwvvt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-nwvvt&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.264209 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"test-operator-controller-manager-dockercfg-nwvvt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-nwvvt&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.284209 4758 reflector.go:561] object-"openstack"/"watcher-decision-engine-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-decision-engine-config-data&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.284288 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-decision-engine-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-decision-engine-config-data&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.304711 4758 reflector.go:561] object-"openstack"/"keystone-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.304803 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:25.322952 4758 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.324346 4758 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.324392 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:25.343559 4758 request.go:700] Waited for 7.011711725s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=84647 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.344157 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.344206 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.364439 4758 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.364497 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.384657 4758 reflector.go:561] object-"openstack"/"ovnnorthd-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.384717 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"ovnnorthd-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.404414 4758 reflector.go:561] object-"openshift-ingress"/"router-dockercfg-zdk86": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.404465 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-dockercfg-zdk86\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.423868 4758 reflector.go:561] object-"openstack"/"cert-galera-openstack-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.423952 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.444646 4758 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.444733 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.464585 4758 reflector.go:561] object-"openstack"/"horizon-horizon-dockercfg-n2vxv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-n2vxv&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.464641 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"horizon-horizon-dockercfg-n2vxv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-n2vxv&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.484900 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.484981 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.504144 4758 reflector.go:561] object-"openstack"/"ovsdbserver-sb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.504206 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-sb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.524355 4758 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.524414 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.544657 4758 reflector.go:561] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.544862 4758 
reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-d427c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.564318 4758 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.564428 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.585041 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.585109 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.603953 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.604005 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:25.619920 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" podUID="901f347a-3b10-4392-8247-41a859112544" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:25.620043 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" podUID="901f347a-3b10-4392-8247-41a859112544" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.623993 4758 reflector.go:561] object-"openstack-operators"/"openstack-operator-index-dockercfg-ck689": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-ck689&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.624070 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-index-dockercfg-ck689\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-ck689&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.643990 4758 reflector.go:561] object-"openstack"/"cert-swift-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-swift-public-svc&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.644038 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-swift-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-swift-public-svc&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.664640 4758 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.664704 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.683874 4758 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.683935 
4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.704019 4758 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.704099 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.725975 4758 reflector.go:561] object-"openstack"/"neutron-neutron-dockercfg-zvr2k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-zvr2k&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.726053 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-neutron-dockercfg-zvr2k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-zvr2k&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.744215 4758 reflector.go:561] object-"openstack"/"cert-rabbitmq-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.744267 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-rabbitmq-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.763630 4758 reflector.go:561] object-"openstack"/"cinder-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.763692 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:25.782872 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" podUID="25848d11-6830-45f8-aff0-0082594b5f3f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:25.783290 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" podUID="25848d11-6830-45f8-aff0-0082594b5f3f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.783621 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.783676 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.803917 4758 reflector.go:561] object-"openshift-ingress"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.804374 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.824232 4758 reflector.go:561] object-"openstack"/"placement-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.824296 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: 
connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.844497 4758 reflector.go:561] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2zlds": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-2zlds&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.844572 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-ovnnorthd-dockercfg-2zlds\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-2zlds&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.864091 4758 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.864158 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.883935 4758 reflector.go:561] object-"openstack"/"cert-horizon-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-horizon-svc&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.883979 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-horizon-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-horizon-svc&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.904324 4758 reflector.go:561] object-"openstack"/"neutron-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.904368 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.924567 4758 reflector.go:561] object-"openstack"/"cinder-volume-nfs-config-data": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-config-data&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.924647 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-volume-nfs-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-config-data&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.944527 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-default-user&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.944587 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-default-user&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.964675 4758 reflector.go:561] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.964778 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"service-ca-dockercfg-pn86c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:25.984070 4758 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:25.984183 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-gkqpw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.004047 4758 reflector.go:561] 
object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.004120 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.024237 4758 reflector.go:561] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zfvmv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-zfvmv&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.024338 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ironic-operator-controller-manager-dockercfg-zfvmv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-zfvmv&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.025013 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.044794 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.044873 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.064097 4758 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc 
kubenswrapper[4758]: E0122 18:00:26.064168 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.084353 4758 reflector.go:561] object-"openstack"/"memcached-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.084467 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.104228 4758 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.104310 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.108921 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.108978 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.109170 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.109222 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="743945d0-7488-4665-beaf-f2026e10a424" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.9:9090/-/ready\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.108924 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.109363 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="743945d0-7488-4665-beaf-f2026e10a424" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.9:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.109385 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.109475 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.109803 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.109893 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/healthz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.110598 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"0d34a0000f5fcdb9c5200fca3bbdfa6438c3dfb190ac5b100564f735cb276bbe"} pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" containerMessage="Container manager failed liveness probe, will be restarted" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.110633 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" podUID="78689fee-3fe7-47d2-866d-6465d23378ea" containerName="manager" containerID="cri-o://0d34a0000f5fcdb9c5200fca3bbdfa6438c3dfb190ac5b100564f735cb276bbe" gracePeriod=10 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.110854 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"5e4cfe8dee549f90ddd7da44b917a696b4ad8b9811a62376b4463b33d409636a"} pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" containerMessage="Container manager failed liveness probe, will be restarted" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.110901 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" 
containerName="manager" containerID="cri-o://5e4cfe8dee549f90ddd7da44b917a696b4ad8b9811a62376b4463b33d409636a" gracePeriod=10 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.124357 4758 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.124444 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.144235 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.144296 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.165049 4758 reflector.go:561] object-"openstack"/"keystone-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.165124 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.184196 4758 reflector.go:561] object-"openstack"/"telemetry-ceilometer-dockercfg-kvpw9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-kvpw9&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.184254 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-ceilometer-dockercfg-kvpw9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-kvpw9&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" 
logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.202967 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" podUID="d67bb459-81fe-48a2-ac8a-cb4441bb35bb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.202988 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" podUID="d67bb459-81fe-48a2-ac8a-cb4441bb35bb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.203905 4758 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.203978 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.212033 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.224480 4758 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.224549 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.231851 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.242342 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.249040 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.250680 4758 reflector.go:561] 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.250769 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.285248 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.285341 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.285579 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.287143 4758 reflector.go:561] object-"openstack"/"cinder-volume-nfs-2-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-2-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.287230 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-volume-nfs-2-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-2-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.299710 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.305168 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc 
kubenswrapper[4758]: E0122 18:00:26.305251 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.324038 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.324512 4758 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.324582 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.344307 4758 request.go:700] Waited for 7.064721161s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=84143 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.345004 4758 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.345109 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.365024 4758 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.365111 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.384860 4758 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.384938 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.403758 4758 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.403834 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.429878 4758 reflector.go:561] object-"openshift-dns-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84691": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.430269 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84691\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.443932 4758 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.444042 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed 
to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.464856 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.464950 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.484692 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.484790 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.504670 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.504772 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.524669 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc 
kubenswrapper[4758]: E0122 18:00:26.524764 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.544026 4758 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.544095 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.563793 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.563886 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.575498 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=\"\"" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.575577 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=\"\"" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.584284 4758 reflector.go:561] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8x67n": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-8x67n&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.584373 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"heat-operator-controller-manager-dockercfg-8x67n\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-8x67n&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.604182 4758 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8t2s8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-init-dockercfg-8t2s8&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.604297 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-init-dockercfg-8t2s8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-init-dockercfg-8t2s8&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.624524 4758 reflector.go:561] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.624611 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"registry-dockercfg-kzzsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.644127 4758 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.644210 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc 
kubenswrapper[4758]: W0122 18:00:26.663960 4758 reflector.go:561] object-"openstack"/"cert-placement-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.664042 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-placement-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.683711 4758 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.683791 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.703980 4758 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.704021 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.724000 4758 reflector.go:561] object-"openstack"/"cert-placement-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-public-svc&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.724050 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-placement-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-public-svc&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.744392 4758 reflector.go:561] 
object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.744494 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-thanos-prometheus-http-client-file\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.758096 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" podUID="35a3fafd-45ea-465d-90ef-36148a60685e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.764332 4758 reflector.go:561] object-"openshift-nmstate"/"nmstate-operator-dockercfg-2sf4f": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-2sf4f&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.764425 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-operator-dockercfg-2sf4f\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-2sf4f&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.785111 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.785214 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.804325 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.804435 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.824000 4758 reflector.go:561] object-"openstack"/"cert-nova-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.824088 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.844195 4758 reflector.go:561] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-s6bn2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-s6bn2&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.844270 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"watcher-operator-controller-manager-dockercfg-s6bn2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-s6bn2&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.864043 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.864126 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.884491 4758 reflector.go:561] object-"openstack"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.884567 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openshift-service-ca.crt\": 
Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.904152 4758 reflector.go:561] object-"openstack"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.904190 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.923799 4758 reflector.go:561] object-"openstack"/"cert-cinder-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-internal-svc&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.923865 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-cinder-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-internal-svc&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.944042 4758 reflector.go:561] object-"metallb-system"/"frr-startup": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.944150 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-startup\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:26.954915 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-lpprz" podUID="cc433179-ae5b-4250-80c2-97af371fdfed" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.964925 4758 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.964996 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:26.984054 4758 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:26.984140 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.005128 4758 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.005209 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.024946 4758 reflector.go:561] object-"openstack"/"cert-barbican-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-barbican-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.025040 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-barbican-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-barbican-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.044011 4758 reflector.go:561] object-"openstack"/"barbican-keystone-listener-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.044089 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"barbican-keystone-listener-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.063989 4758 reflector.go:561] object-"openstack"/"cert-nova-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-public-svc&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.064084 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-public-svc&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.083693 4758 reflector.go:561] object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.083774 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.104202 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.104250 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-rq7zk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.123804 4758 reflector.go:561] object-"openshift-authentication"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 
18:00:27.123850 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.144669 4758 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.144736 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-rg9jl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.164324 4758 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.164406 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.184736 4758 reflector.go:561] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.184868 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.204152 4758 reflector.go:561] object-"openshift-nmstate"/"nginx-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: 
connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.204256 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nginx-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.225109 4758 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.225216 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.244378 4758 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.244458 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.263904 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.264022 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.284070 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc 
kubenswrapper[4758]: E0122 18:00:27.284375 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.304362 4758 reflector.go:561] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gmg82": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-gmg82&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.304483 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"nova-operator-controller-manager-dockercfg-gmg82\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-gmg82&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.324213 4758 reflector.go:561] object-"openshift-dns"/"dns-default-metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.324341 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default-metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.343853 4758 reflector.go:561] object-"openshift-dns"/"dns-dockercfg-jwfmh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.343920 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-dockercfg-jwfmh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:27.363896 4758 request.go:700] Waited for 7.09479571s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=84627 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.364500 4758 reflector.go:561] 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.364567 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.384431 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.384537 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.404870 4758 reflector.go:561] object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zpd54": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-zpd54&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.404973 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"horizon-operator-controller-manager-dockercfg-zpd54\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-zpd54&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.423897 4758 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.423951 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection 
refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.443794 4758 reflector.go:561] object-"openstack"/"cert-metric-storage-prometheus-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-metric-storage-prometheus-svc&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.443850 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-metric-storage-prometheus-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-metric-storage-prometheus-svc&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.464290 4758 reflector.go:561] object-"openstack"/"ovnnorthd-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.464351 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.484325 4758 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-z2sxt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-z2sxt&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.484406 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-z2sxt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-z2sxt&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.504146 4758 reflector.go:561] object-"openstack"/"keystone": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.504244 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.524120 4758 reflector.go:561] 
object-"openstack"/"cert-keystone-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-internal-svc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.524177 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-keystone-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-internal-svc&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.543709 4758 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.543815 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.563836 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.563944 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.584455 4758 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.584527 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.604561 4758 reflector.go:561] object-"openstack-operators"/"infra-operator-webhook-server-cert": failed to 
list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.604639 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.624680 4758 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-q7gzx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-q7gzx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.624788 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-q7gzx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-q7gzx&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.632688 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events/ceilometer-0.188d1f633c8ab212\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ceilometer-0.188d1f633c8ab212 openstack 84476 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:ceilometer-0,UID:93923998-0016-4db9-adff-a433c7a8d57c,APIVersion:v1,ResourceVersion:49775,FieldPath:spec.containers{ceilometer-notification-agent},},Reason:Unhealthy,Message:Liveness probe failed: command timed out,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 17:58:59 +0000 UTC,LastTimestamp:2026-01-22 17:59:29.70354374 +0000 UTC m=+5391.186883055,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.644264 4758 reflector.go:561] object-"openstack"/"watcher-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-api-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.644312 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-api-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.664575 4758 reflector.go:561] 
object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.664665 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.684803 4758 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.684922 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.704262 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.704382 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.724455 4758 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.724548 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.731623 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="7s" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.744699 4758 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.744783 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.764857 4758 reflector.go:561] object-"hostpath-provisioner"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.764955 4758 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.783989 4758 reflector.go:561] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pxl5h": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-pxl5h&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.784055 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-ovncontroller-dockercfg-pxl5h\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-pxl5h&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.804153 4758 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.804207 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.823911 4758 reflector.go:561] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lk2r2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-lk2r2&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.823951 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"keystone-operator-controller-manager-dockercfg-lk2r2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-lk2r2&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.843795 4758 reflector.go:561] object-"openstack"/"horizon": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.843829 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.863988 4758 reflector.go:561] object-"openstack"/"ceilometer-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.864023 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.885075 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.885116 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.904347 4758 reflector.go:561] object-"openstack"/"nova-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.904445 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.924378 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.924475 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.944545 4758 reflector.go:561] object-"openstack"/"barbican-barbican-dockercfg-z4pqk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-z4pqk&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.944591 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-barbican-dockercfg-z4pqk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-z4pqk&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.963992 4758 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.964086 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:27.983642 4758 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:27.983691 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.004455 4758 reflector.go:561] object-"openstack"/"openstackclient-openstackclient-dockercfg-kmlnc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-kmlnc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.004501 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstackclient-openstackclient-dockercfg-kmlnc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-kmlnc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.024583 4758 reflector.go:561] object-"cert-manager"/"cert-manager-webhook-dockercfg-9xxdc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-9xxdc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.024651 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-9xxdc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-9xxdc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.044415 4758 reflector.go:561] 
object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.044464 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.064605 4758 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.064661 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.084309 4758 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.084363 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:28.091883 4758 patch_prober.go:28] interesting pod/router-default-5444994796-7jtcn container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:28.091935 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-7jtcn" podUID="1d2c5bee-e237-4043-9f8a-73bb67ebf355" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.104818 4758 reflector.go:561] object-"openstack"/"cert-neutron-internal-svc": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-internal-svc&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.104895 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-neutron-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-internal-svc&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.125110 4758 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.125204 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.144037 4758 reflector.go:561] object-"metallb-system"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.144122 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.164185 4758 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.164297 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.184725 4758 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.184830 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.203793 4758 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.203833 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.224114 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-2": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-2&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.224167 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-2&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.244326 4758 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.244420 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.264654 4758 reflector.go:561] object-"openstack"/"cert-memcached-svc": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-memcached-svc&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.264771 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-memcached-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-memcached-svc&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:28.267181 4758 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:28.267217 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.289003 4758 reflector.go:561] object-"openshift-operators"/"observability-operator-sa-dockercfg-rdwz2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-rdwz2&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.289059 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-rdwz2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-rdwz2&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.304247 4758 reflector.go:561] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nzrzh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-nzrzh&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.304343 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"swift-operator-controller-manager-dockercfg-nzrzh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-nzrzh&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.323977 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.324036 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.344574 4758 reflector.go:561] object-"openstack-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.344833 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:28.366871 4758 request.go:700] Waited for 7.104687639s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=84173 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.367282 4758 reflector.go:561] object-"openstack"/"ovndbcluster-nb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.367325 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.384589 4758 reflector.go:561] object-"openstack"/"dns": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.384654 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:28.397694 4758 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8081/readyz\": dial tcp 10.217.0.221:8081: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.404331 4758 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-x59mw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-x59mw&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.404414 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-sb-dockercfg-x59mw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-x59mw&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.424392 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-dockercfg-5sdkn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-5sdkn&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.424443 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-dockercfg-5sdkn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-5sdkn&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.444144 4758 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.444193 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.464465 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-cliconfig": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.464521 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.485093 4758 reflector.go:561] object-"openstack"/"glance-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.485177 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.504883 4758 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.504947 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.524120 4758 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.524188 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.544249 4758 reflector.go:561] object-"openstack"/"ovndbcluster-sb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.544320 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.564335 4758 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.564426 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.584557 4758 reflector.go:561] object-"openshift-operators"/"observability-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.584623 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.604108 4758 reflector.go:561] object-"openstack"/"ovndbcluster-sb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.604159 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.613524 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T18:00:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T18:00:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T18:00:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T18:00:28Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.613855 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.614067 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.614264 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.614457 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.614471 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.623721 4758 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.623793 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.644656 4758 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.644714 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.664718 4758 reflector.go:561] object-"openshift-console"/"console-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.664791 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.684498 4758 reflector.go:561] object-"openstack"/"galera-openstack-dockercfg-g2jsf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-g2jsf&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.684554 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-dockercfg-g2jsf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-g2jsf&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.704362 4758 reflector.go:561] object-"openshift-nmstate"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.704410 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.724257 4758 reflector.go:561] object-"openstack"/"rabbitmq-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection 
refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.724324 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.743816 4758 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.743859 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.764557 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.764605 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.784684 4758 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.784752 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.809451 4758 reflector.go:561] object-"openstack"/"cert-galera-openstack-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 
crc kubenswrapper[4758]: E0122 18:00:28.809513 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.823868 4758 reflector.go:561] object-"openshift-console"/"console-dockercfg-f62pw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.823936 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-dockercfg-f62pw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:28.832679 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:28.832733 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.844354 4758 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.844397 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.864361 4758 reflector.go:561] object-"openstack"/"openstack-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.864412 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.883956 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-web-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.884006 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-web-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.904290 4758 reflector.go:561] object-"openstack"/"memcached-memcached-dockercfg-2w6nn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-2w6nn&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.904345 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-memcached-dockercfg-2w6nn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-2w6nn&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.924238 4758 reflector.go:561] object-"cert-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.924285 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.945173 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.945232 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.964047 4758 reflector.go:561] object-"openstack"/"cinder-cinder-dockercfg-85hcg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-85hcg&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.964134 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-cinder-dockercfg-85hcg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-85hcg&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:28.984182 4758 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:28.984237 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.004477 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.004538 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.048504 4758 reflector.go:561] object-"openshift-dns-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.048574 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.048711 4758 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.049041 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.064151 4758 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.064493 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.084806 4758 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.084878 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.104312 4758 reflector.go:561] object-"openstack"/"default-dockercfg-d4w66": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-d4w66&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.104371 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"default-dockercfg-d4w66\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-d4w66&resourceVersion=84143\": dial 
tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.124126 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.124187 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.145037 4758 reflector.go:561] object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qcqlv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-qcqlv&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.145111 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"placement-operator-controller-manager-dockercfg-qcqlv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-qcqlv&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.164084 4758 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.164145 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.186338 4758 reflector.go:561] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.186417 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ingress-canary\"/\"default-dockercfg-2llfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.204237 4758 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.204303 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.224203 4758 reflector.go:561] object-"openshift-ingress"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.224278 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.244713 4758 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.244806 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.264644 4758 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.264702 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list 
*v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.284041 4758 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pz96z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-pz96z&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.284105 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-controller-manager-dockercfg-pz96z\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-pz96z&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.304333 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.304396 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.325537 4758 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.325610 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.344787 4758 reflector.go:561] object-"openstack"/"cert-barbican-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-barbican-public-svc&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 
18:00:29.344837 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-barbican-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-barbican-public-svc&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.364713 4758 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.364800 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.383703 4758 request.go:700] Waited for 6.888552856s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=84173 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.384298 4758 reflector.go:561] object-"openstack"/"dns-svc": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.384359 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-svc\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.413325 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" event={"ID":"e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7","Type":"ContainerDied","Data":"240edd2f680249409c003b4f15a98966b1e1d8f25dbe8d8d91e622618a7b238d"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.413660 4758 generic.go:334] "Generic (PLEG): container finished" podID="e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7" containerID="240edd2f680249409c003b4f15a98966b1e1d8f25dbe8d8d91e622618a7b238d" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.414451 4758 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.414575 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.415379 4758 scope.go:117] "RemoveContainer" containerID="240edd2f680249409c003b4f15a98966b1e1d8f25dbe8d8d91e622618a7b238d" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.422497 4758 generic.go:334] "Generic (PLEG): container finished" podID="7d2439ad-1ca6-4c24-9d15-e04f0f89aedf" containerID="f3cef0682a195659f7b5e3123741938c84f23055a202fd57fcc714b2d9d731c7" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.427084 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" event={"ID":"7d2439ad-1ca6-4c24-9d15-e04f0f89aedf","Type":"ContainerDied","Data":"f3cef0682a195659f7b5e3123741938c84f23055a202fd57fcc714b2d9d731c7"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.428041 4758 scope.go:117] "RemoveContainer" containerID="f3cef0682a195659f7b5e3123741938c84f23055a202fd57fcc714b2d9d731c7" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.430201 4758 reflector.go:561] object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.430260 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-7lnqk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.445093 4758 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.445161 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.451223 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.451326 4758 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="087a29a92b87397845777f3d37268935361fbcdc0080c0ed7d757240b78974bb" exitCode=0 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.451769 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"087a29a92b87397845777f3d37268935361fbcdc0080c0ed7d757240b78974bb"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.451875 4758 scope.go:117] "RemoveContainer" containerID="9aacb0bb9a3bcb2aa8424102cf4fd83df93c8f5f5e530a92298a469153caeb7b" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.455262 4758 generic.go:334] "Generic (PLEG): container finished" podID="4801e5d3-a66d-4856-bfc2-95dfebf6f442" containerID="34e5ed2937b7a59087b73abe476686d3020b33f60f84c8d5a883a13c7960304d" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.455335 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" event={"ID":"4801e5d3-a66d-4856-bfc2-95dfebf6f442","Type":"ContainerDied","Data":"34e5ed2937b7a59087b73abe476686d3020b33f60f84c8d5a883a13c7960304d"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.456664 4758 scope.go:117] "RemoveContainer" containerID="34e5ed2937b7a59087b73abe476686d3020b33f60f84c8d5a883a13c7960304d" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.457537 4758 generic.go:334] "Generic (PLEG): container finished" podID="d67bb459-81fe-48a2-ac8a-cb4441bb35bb" containerID="95d524686bf752428f84ea0aeeb170f883fe48d942e5469121e60914ddd0df88" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.457598 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" event={"ID":"d67bb459-81fe-48a2-ac8a-cb4441bb35bb","Type":"ContainerDied","Data":"95d524686bf752428f84ea0aeeb170f883fe48d942e5469121e60914ddd0df88"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.458295 4758 scope.go:117] "RemoveContainer" containerID="95d524686bf752428f84ea0aeeb170f883fe48d942e5469121e60914ddd0df88" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.461429 4758 generic.go:334] "Generic (PLEG): container finished" podID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerID="a86ae74b37544ab164be41ebf400131e9e7d915da894679621c4bbdc42ef92f9" exitCode=137 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.461596 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" event={"ID":"4612798c-6ae6-4a07-afe6-3f3574ee467b","Type":"ContainerDied","Data":"a86ae74b37544ab164be41ebf400131e9e7d915da894679621c4bbdc42ef92f9"} Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.464307 4758 reflector.go:561] object-"openstack"/"openstack-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.464362 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" 
logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.471572 4758 generic.go:334] "Generic (PLEG): container finished" podID="8afd29cc-2dab-460e-ad9d-f17690c15f41" containerID="c62d76911da0f5713e9e27fb9411fcce83f728d29a3f1dfcd100c7f9a1299640" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.471690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" event={"ID":"8afd29cc-2dab-460e-ad9d-f17690c15f41","Type":"ContainerDied","Data":"c62d76911da0f5713e9e27fb9411fcce83f728d29a3f1dfcd100c7f9a1299640"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.472551 4758 scope.go:117] "RemoveContainer" containerID="c62d76911da0f5713e9e27fb9411fcce83f728d29a3f1dfcd100c7f9a1299640" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.482884 4758 generic.go:334] "Generic (PLEG): container finished" podID="25848d11-6830-45f8-aff0-0082594b5f3f" containerID="76577c5221b29a65d8db3dbdf6da6b58ef6868ad173ac9ff49414491ca910328" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.483101 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" event={"ID":"25848d11-6830-45f8-aff0-0082594b5f3f","Type":"ContainerDied","Data":"76577c5221b29a65d8db3dbdf6da6b58ef6868ad173ac9ff49414491ca910328"} Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.483939 4758 reflector.go:561] object-"openstack"/"glance-default-internal-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.483991 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-internal-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.484431 4758 scope.go:117] "RemoveContainer" containerID="76577c5221b29a65d8db3dbdf6da6b58ef6868ad173ac9ff49414491ca910328" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.487478 4758 generic.go:334] "Generic (PLEG): container finished" podID="c73a71b4-f1fd-4a6c-9832-ce9b48a5f220" containerID="98763afcc5b175076c7ccd2ff919e441b44b7eef4344c4bb01c274b2de476b81" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.487579 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" event={"ID":"c73a71b4-f1fd-4a6c-9832-ce9b48a5f220","Type":"ContainerDied","Data":"98763afcc5b175076c7ccd2ff919e441b44b7eef4344c4bb01c274b2de476b81"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.488599 4758 scope.go:117] "RemoveContainer" containerID="98763afcc5b175076c7ccd2ff919e441b44b7eef4344c4bb01c274b2de476b81" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.492925 4758 generic.go:334] "Generic (PLEG): container finished" podID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerID="5e4cfe8dee549f90ddd7da44b917a696b4ad8b9811a62376b4463b33d409636a" exitCode=0 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.492986 4758 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" event={"ID":"659f7d3e-5518-4d19-bb54-e39295a667d2","Type":"ContainerDied","Data":"5e4cfe8dee549f90ddd7da44b917a696b4ad8b9811a62376b4463b33d409636a"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.500230 4758 generic.go:334] "Generic (PLEG): container finished" podID="40845474-36a2-48c0-a0df-af5deb2a31fd" containerID="a7809f27497752a919b6754cb12a9a6bab28418e529fc85219c6af1b2b6e0687" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.500717 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" event={"ID":"40845474-36a2-48c0-a0df-af5deb2a31fd","Type":"ContainerDied","Data":"a7809f27497752a919b6754cb12a9a6bab28418e529fc85219c6af1b2b6e0687"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.502361 4758 scope.go:117] "RemoveContainer" containerID="a7809f27497752a919b6754cb12a9a6bab28418e529fc85219c6af1b2b6e0687" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.505237 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.505312 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.513871 4758 generic.go:334] "Generic (PLEG): container finished" podID="901f347a-3b10-4392-8247-41a859112544" containerID="f795c930a8e12fda9c2045dccf29f2f5cfba9ae856a5150b6b7f51bce50b4ae6" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.514075 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" event={"ID":"901f347a-3b10-4392-8247-41a859112544","Type":"ContainerDied","Data":"f795c930a8e12fda9c2045dccf29f2f5cfba9ae856a5150b6b7f51bce50b4ae6"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.514668 4758 scope.go:117] "RemoveContainer" containerID="f795c930a8e12fda9c2045dccf29f2f5cfba9ae856a5150b6b7f51bce50b4ae6" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.524015 4758 reflector.go:561] object-"openstack"/"barbican-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.524100 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" 
Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.525674 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver/0.log" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.527329 4758 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8e33eb125ab84769bb47bfb5bbf4c3643562a9ae950fe7f4a6f3ddde4057d86b" exitCode=137 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.527441 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8e33eb125ab84769bb47bfb5bbf4c3643562a9ae950fe7f4a6f3ddde4057d86b"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.532177 4758 generic.go:334] "Generic (PLEG): container finished" podID="e7fdd2cd-e517-46b5-acb3-22b59b7f132f" containerID="810588e7840d9ff4f9a2fccf0bebff7066b6141e074eff2931aa110dff601661" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.532295 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" event={"ID":"e7fdd2cd-e517-46b5-acb3-22b59b7f132f","Type":"ContainerDied","Data":"810588e7840d9ff4f9a2fccf0bebff7066b6141e074eff2931aa110dff601661"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.533189 4758 scope.go:117] "RemoveContainer" containerID="810588e7840d9ff4f9a2fccf0bebff7066b6141e074eff2931aa110dff601661" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.543834 4758 reflector.go:561] object-"openstack"/"rabbitmq-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-config-data&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.543889 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-config-data&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.544810 4758 generic.go:334] "Generic (PLEG): container finished" podID="cc433179-ae5b-4250-80c2-97af371fdfed" containerID="94d80fab259bbdba24e6cb6f6b906c1c7fc7544cc57f0cf0de9ee3c67a648b6c" exitCode=137 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.544926 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lpprz" event={"ID":"cc433179-ae5b-4250-80c2-97af371fdfed","Type":"ContainerDied","Data":"94d80fab259bbdba24e6cb6f6b906c1c7fc7544cc57f0cf0de9ee3c67a648b6c"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.548849 4758 generic.go:334] "Generic (PLEG): container finished" podID="644142ed-c937-406d-9fd5-3fe856879a92" containerID="59d62e800ffc23ef90c6cb957fb818dce6dce732562db83f9c8eba85e2739440" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.548919 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" event={"ID":"644142ed-c937-406d-9fd5-3fe856879a92","Type":"ContainerDied","Data":"59d62e800ffc23ef90c6cb957fb818dce6dce732562db83f9c8eba85e2739440"} 
Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.549841 4758 scope.go:117] "RemoveContainer" containerID="59d62e800ffc23ef90c6cb957fb818dce6dce732562db83f9c8eba85e2739440" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.554004 4758 generic.go:334] "Generic (PLEG): container finished" podID="26d5529a-b270-40fc-9faa-037435dd2f80" containerID="fc8ff14bdec8806608a8a75f3794ae87e47866f8eec743c5d6cec4f1daefb700" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.554105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" event={"ID":"26d5529a-b270-40fc-9faa-037435dd2f80","Type":"ContainerDied","Data":"fc8ff14bdec8806608a8a75f3794ae87e47866f8eec743c5d6cec4f1daefb700"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.555210 4758 scope.go:117] "RemoveContainer" containerID="fc8ff14bdec8806608a8a75f3794ae87e47866f8eec743c5d6cec4f1daefb700" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.557774 4758 generic.go:334] "Generic (PLEG): container finished" podID="c3e0f5c7-10cb-441c-9516-f6de8fe29757" containerID="1489735902e42f8c37aa85aefd23353e063ce3e0f78177639e0df6c46ddeb829" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.557876 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" event={"ID":"c3e0f5c7-10cb-441c-9516-f6de8fe29757","Type":"ContainerDied","Data":"1489735902e42f8c37aa85aefd23353e063ce3e0f78177639e0df6c46ddeb829"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.558923 4758 scope.go:117] "RemoveContainer" containerID="1489735902e42f8c37aa85aefd23353e063ce3e0f78177639e0df6c46ddeb829" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.561331 4758 generic.go:334] "Generic (PLEG): container finished" podID="d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13" containerID="cf45c93385b847cb95046805b7d0579501a8fde4e96aec554951f00da0293ebc" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.561392 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" event={"ID":"d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13","Type":"ContainerDied","Data":"cf45c93385b847cb95046805b7d0579501a8fde4e96aec554951f00da0293ebc"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.561757 4758 scope.go:117] "RemoveContainer" containerID="cf45c93385b847cb95046805b7d0579501a8fde4e96aec554951f00da0293ebc" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.564309 4758 reflector.go:561] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.564376 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-9mqw5\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.565805 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="fa976a5e-7cd9-402f-9792-015ca1488d1f" containerID="ce5016f114838dcaca7cc66b44c49904276b6456085e1179fe6e8e2419474ace" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.565891 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" event={"ID":"fa976a5e-7cd9-402f-9792-015ca1488d1f","Type":"ContainerDied","Data":"ce5016f114838dcaca7cc66b44c49904276b6456085e1179fe6e8e2419474ace"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.566375 4758 scope.go:117] "RemoveContainer" containerID="ce5016f114838dcaca7cc66b44c49904276b6456085e1179fe6e8e2419474ace" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.569169 4758 generic.go:334] "Generic (PLEG): container finished" podID="c4847ca7-5057-4d6d-80c5-f74c7d633e83" containerID="7308af29c5d418456639ec19b8ae89b374cefa9362e0fb4e0f7a39c32ed934c0" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.569208 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" event={"ID":"c4847ca7-5057-4d6d-80c5-f74c7d633e83","Type":"ContainerDied","Data":"7308af29c5d418456639ec19b8ae89b374cefa9362e0fb4e0f7a39c32ed934c0"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.570109 4758 scope.go:117] "RemoveContainer" containerID="7308af29c5d418456639ec19b8ae89b374cefa9362e0fb4e0f7a39c32ed934c0" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.572567 4758 generic.go:334] "Generic (PLEG): container finished" podID="f5135718-a42b-4089-922b-9fba781fe6db" containerID="09f5beedb93e30a4b68e826f33ffdbcfe408d643e4a6667b28b1a56cfbd08bc2" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.572656 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" event={"ID":"f5135718-a42b-4089-922b-9fba781fe6db","Type":"ContainerDied","Data":"09f5beedb93e30a4b68e826f33ffdbcfe408d643e4a6667b28b1a56cfbd08bc2"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.573454 4758 scope.go:117] "RemoveContainer" containerID="09f5beedb93e30a4b68e826f33ffdbcfe408d643e4a6667b28b1a56cfbd08bc2" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.575958 4758 generic.go:334] "Generic (PLEG): container finished" podID="86017532-da20-4917-8f8b-34190218edc2" containerID="f4e1ecc33b122dfeea31b64b121de90bd388c7aeb97dc5736a98282952aea0bb" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.576050 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" event={"ID":"86017532-da20-4917-8f8b-34190218edc2","Type":"ContainerDied","Data":"f4e1ecc33b122dfeea31b64b121de90bd388c7aeb97dc5736a98282952aea0bb"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.577261 4758 scope.go:117] "RemoveContainer" containerID="f4e1ecc33b122dfeea31b64b121de90bd388c7aeb97dc5736a98282952aea0bb" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.579132 4758 generic.go:334] "Generic (PLEG): container finished" podID="cdd1962b-fbf0-480c-b5e2-e28ee6988046" containerID="ac2fce5f5864d1bf8541cf8f20b6f471fb03b7883c80af965fc653333bc7afd4" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.579215 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" 
event={"ID":"cdd1962b-fbf0-480c-b5e2-e28ee6988046","Type":"ContainerDied","Data":"ac2fce5f5864d1bf8541cf8f20b6f471fb03b7883c80af965fc653333bc7afd4"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.579657 4758 scope.go:117] "RemoveContainer" containerID="ac2fce5f5864d1bf8541cf8f20b6f471fb03b7883c80af965fc653333bc7afd4" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.583276 4758 generic.go:334] "Generic (PLEG): container finished" podID="93923998-0016-4db9-adff-a433c7a8d57c" containerID="fafb5d2fa75b2b190a38003bc6cece90b275597f24e157d6ae4d1a4780c75472" exitCode=0 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.583342 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93923998-0016-4db9-adff-a433c7a8d57c","Type":"ContainerDied","Data":"fafb5d2fa75b2b190a38003bc6cece90b275597f24e157d6ae4d1a4780c75472"} Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.583905 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.583946 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.586476 4758 generic.go:334] "Generic (PLEG): container finished" podID="71c16ac1-3276-4096-93c5-d10765320713" containerID="b7623be75913161b201b9b3a55bc1959c9b6136ccdad6e64a3461f0147694c7c" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.586592 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" event={"ID":"71c16ac1-3276-4096-93c5-d10765320713","Type":"ContainerDied","Data":"b7623be75913161b201b9b3a55bc1959c9b6136ccdad6e64a3461f0147694c7c"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.587639 4758 scope.go:117] "RemoveContainer" containerID="b7623be75913161b201b9b3a55bc1959c9b6136ccdad6e64a3461f0147694c7c" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.592259 4758 generic.go:334] "Generic (PLEG): container finished" podID="5ade5af9-f79e-4285-841c-0f08e88cca47" containerID="c8dfde0b29e3dd16bd35e249a70593762e6ce0947cfa9be7442ca7bc4007ffe6" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.592413 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" event={"ID":"5ade5af9-f79e-4285-841c-0f08e88cca47","Type":"ContainerDied","Data":"c8dfde0b29e3dd16bd35e249a70593762e6ce0947cfa9be7442ca7bc4007ffe6"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.593398 4758 scope.go:117] "RemoveContainer" containerID="c8dfde0b29e3dd16bd35e249a70593762e6ce0947cfa9be7442ca7bc4007ffe6" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.596312 4758 generic.go:334] "Generic (PLEG): container finished" podID="78689fee-3fe7-47d2-866d-6465d23378ea" 
containerID="0d34a0000f5fcdb9c5200fca3bbdfa6438c3dfb190ac5b100564f735cb276bbe" exitCode=0 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.596513 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" event={"ID":"78689fee-3fe7-47d2-866d-6465d23378ea","Type":"ContainerDied","Data":"0d34a0000f5fcdb9c5200fca3bbdfa6438c3dfb190ac5b100564f735cb276bbe"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.599357 4758 generic.go:334] "Generic (PLEG): container finished" podID="36cf0be1-e796-4c9e-b232-2a0c0ceaaa79" containerID="c37297ebe88579c5a107fed428fe55697fdd51c3ee150191378695cbde38f831" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.599446 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bpw4j" event={"ID":"36cf0be1-e796-4c9e-b232-2a0c0ceaaa79","Type":"ContainerDied","Data":"c37297ebe88579c5a107fed428fe55697fdd51c3ee150191378695cbde38f831"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.600322 4758 scope.go:117] "RemoveContainer" containerID="c37297ebe88579c5a107fed428fe55697fdd51c3ee150191378695cbde38f831" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.601784 4758 generic.go:334] "Generic (PLEG): container finished" podID="19b4b900-d90f-4e59-b082-61f058f5882b" containerID="d51fb1ad15f929a23ca45418e301aaa67b68ac4fdfe0dfa8eb39fcbdb4b8a0f6" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.601890 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" event={"ID":"19b4b900-d90f-4e59-b082-61f058f5882b","Type":"ContainerDied","Data":"d51fb1ad15f929a23ca45418e301aaa67b68ac4fdfe0dfa8eb39fcbdb4b8a0f6"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.603325 4758 scope.go:117] "RemoveContainer" containerID="d51fb1ad15f929a23ca45418e301aaa67b68ac4fdfe0dfa8eb39fcbdb4b8a0f6" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.604611 4758 reflector.go:561] object-"openstack"/"rabbitmq-server-dockercfg-d8jxf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-d8jxf&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.604665 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-dockercfg-d8jxf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-d8jxf&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.605937 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.606491 4758 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.606592 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.607512 4758 scope.go:117] "RemoveContainer" containerID="bd8a572669e3b65b8c0d5e6a53c4db204ac70fd39fc809c8390f8613506e3ef5" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.610841 4758 generic.go:334] "Generic (PLEG): container finished" podID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" containerID="1d61b57ea732060a674fca3da40faafd12a801a2feede3f87bc0a9c8194f85bb" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.610961 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" event={"ID":"16d19f40-45e9-4f1a-b953-e5c68ca014f3","Type":"ContainerDied","Data":"1d61b57ea732060a674fca3da40faafd12a801a2feede3f87bc0a9c8194f85bb"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.611897 4758 scope.go:117] "RemoveContainer" containerID="1d61b57ea732060a674fca3da40faafd12a801a2feede3f87bc0a9c8194f85bb" Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.616183 4758 generic.go:334] "Generic (PLEG): container finished" podID="35a3fafd-45ea-465d-90ef-36148a60685e" containerID="cd55dc9adc842248637987f9b3fb3f590baf4dde9075a2f9fba7f513cf9fe363" exitCode=1 Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.616242 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" event={"ID":"35a3fafd-45ea-465d-90ef-36148a60685e","Type":"ContainerDied","Data":"cd55dc9adc842248637987f9b3fb3f590baf4dde9075a2f9fba7f513cf9fe363"} Jan 22 18:00:29 crc kubenswrapper[4758]: I0122 18:00:29.617101 4758 scope.go:117] "RemoveContainer" containerID="cd55dc9adc842248637987f9b3fb3f590baf4dde9075a2f9fba7f513cf9fe363" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.624706 4758 reflector.go:561] object-"openstack"/"test-operator-controller-priv-key": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-priv-key&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.624850 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"test-operator-controller-priv-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-priv-key&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.648695 4758 reflector.go:561] object-"openstack"/"neutron-httpd-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.648774 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-httpd-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 
22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.664891 4758 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.664963 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.684048 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-1&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.684147 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-1&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.704766 4758 reflector.go:561] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f7gls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-f7gls&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.704858 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-controller-manager-dockercfg-f7gls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-f7gls&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.723876 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.723956 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-c2lfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.744570 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.744633 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.766065 4758 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.766167 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.784952 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.785008 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-qx5rd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.804545 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.804627 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.824907 4758 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.824971 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.844514 4758 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.844598 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.866222 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.866337 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.885364 4758 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-q7l7k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-q7l7k&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 
18:00:29.885448 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-q7l7k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-q7l7k&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.904601 4758 reflector.go:561] object-"openstack"/"cert-kube-state-metrics-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-kube-state-metrics-svc&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.904678 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-kube-state-metrics-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-kube-state-metrics-svc&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.925238 4758 reflector.go:561] object-"openstack"/"ovsdbserver-nb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.925313 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-nb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.944238 4758 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.944289 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.964029 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.964103 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:29 crc kubenswrapper[4758]: W0122 18:00:29.984444 4758 reflector.go:561] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2w6mb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-2w6mb&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:29 crc kubenswrapper[4758]: E0122 18:00:29.984512 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"manila-operator-controller-manager-dockercfg-2w6mb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-2w6mb&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.005313 4758 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.005380 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.024925 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.025000 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.046063 4758 reflector.go:561] object-"metallb-system"/"frr-k8s-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.046168 4758 
reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.064417 4758 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.064856 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.084639 4758 reflector.go:561] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.084713 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"default-dockercfg-gxtc4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.104806 4758 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.104885 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.124772 4758 reflector.go:561] object-"openshift-ingress-canary"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.124907 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.145143 4758 reflector.go:561] object-"openstack"/"placement-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.145209 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.164274 4758 reflector.go:561] object-"openshift-ingress-canary"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.164333 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.185097 4758 reflector.go:561] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9zqsl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-9zqsl&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.185154 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"barbican-operator-controller-manager-dockercfg-9zqsl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-9zqsl&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.203812 4758 reflector.go:561] object-"openshift-nmstate"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.203880 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed 
to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.224769 4758 reflector.go:561] object-"openstack"/"cert-glance-default-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.224828 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-glance-default-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.244203 4758 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.244267 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.264400 4758 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-9jfxj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-9jfxj&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.264463 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-9jfxj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-9jfxj&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.284567 4758 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.284642 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.305015 4758 reflector.go:561] object-"openstack"/"dns-swift-storage-0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-swift-storage-0&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.305093 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-swift-storage-0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-swift-storage-0&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.324496 4758 reflector.go:561] object-"openstack"/"cert-ceilometer-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ceilometer-internal-svc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.324589 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ceilometer-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ceilometer-internal-svc&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.344665 4758 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.344752 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.364253 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.364320 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: I0122 18:00:30.383953 4758 request.go:700] Waited for 7.041857053s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313 Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.384540 4758 reflector.go:561] object-"hostpath-provisioner"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.384600 4758 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.404775 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.404846 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.425356 4758 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.425437 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.445193 4758 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.445277 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.464356 4758 reflector.go:561] object-"openstack"/"cert-ovnnorthd-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovnnorthd-ovndbs&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.464437 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovnnorthd-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovnnorthd-ovndbs&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.483879 4758 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.484539 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.504180 4758 reflector.go:561] object-"openstack"/"glance-default-external-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.504249 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-external-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.524551 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.524634 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.544831 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.544916 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.564059 4758 reflector.go:561] object-"openshift-nmstate"/"nmstate-handler-dockercfg-v97lh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-v97lh&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.564138 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-handler-dockercfg-v97lh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-v97lh&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.585046 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.585408 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 
18:00:30.605070 4758 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.606354 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.629988 4758 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.630089 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.644568 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-server-dockercfg-8d4mj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-server-dockercfg-8d4mj&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.644646 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-server-dockercfg-8d4mj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-server-dockercfg-8d4mj&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: I0122 18:00:30.648315 4758 generic.go:334] "Generic (PLEG): container finished" podID="6daa1231-490e-4ff7-9157-f49cdec96a5e" containerID="ad4303b386c6e21f3904b24f988068646e3106398b796a612dade9432bc95cd7" exitCode=0 Jan 22 18:00:30 crc kubenswrapper[4758]: I0122 18:00:30.648368 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" event={"ID":"6daa1231-490e-4ff7-9157-f49cdec96a5e","Type":"ContainerDied","Data":"ad4303b386c6e21f3904b24f988068646e3106398b796a612dade9432bc95cd7"} Jan 22 18:00:30 crc kubenswrapper[4758]: I0122 18:00:30.650538 4758 scope.go:117] "RemoveContainer" containerID="ad4303b386c6e21f3904b24f988068646e3106398b796a612dade9432bc95cd7" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.665457 4758 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.665531 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.684541 4758 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.684612 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.705372 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=84596": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.705457 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=84596\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: I0122 18:00:30.725192 4758 status_manager.go:851] "Failed to get status for pod" podUID="644142ed-c937-406d-9fd5-3fe856879a92" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-2xj52\": dial tcp 38.102.83.223:6443: connect: connection refused" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.744114 4758 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.744201 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=84313\": dial tcp 
38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.764275 4758 reflector.go:561] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-d798m": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-d798m&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.764354 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"mariadb-operator-controller-manager-dockercfg-d798m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-d798m&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.786091 4758 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.786177 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.804309 4758 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openstack/ovsdbserver-sb-0" volumeName="ovndbcluster-sb-etc-ovn" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.825085 4758 reflector.go:561] object-"openstack"/"cinder-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.825159 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.844823 4758 reflector.go:561] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.844904 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-m4qtx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.864141 4758 reflector.go:561] object-"openstack"/"galera-openstack-cell1-dockercfg-thg4w": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-thg4w&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.864196 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-cell1-dockercfg-thg4w\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-thg4w&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.884221 4758 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.884284 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.905943 4758 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.906235 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.924975 4758 reflector.go:561] object-"openstack"/"swift-storage-config-data": failed to list 
*v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-storage-config-data&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.925056 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-storage-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-storage-config-data&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.944493 4758 reflector.go:561] object-"openstack"/"cert-ovndbcluster-nb-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-nb-ovndbs&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.944577 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovndbcluster-nb-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-nb-ovndbs&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.964217 4758 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.964292 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:30 crc kubenswrapper[4758]: W0122 18:00:30.984808 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:30 crc kubenswrapper[4758]: E0122 18:00:30.984893 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.004191 4758 reflector.go:561] object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.004265 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.024714 4758 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pdg6h": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-pdg6h&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.024834 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-pdg6h\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-pdg6h&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.044592 4758 reflector.go:561] object-"openstack"/"nova-nova-dockercfg-r6mc9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-r6mc9&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.044697 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-nova-dockercfg-r6mc9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-r6mc9&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.064474 4758 reflector.go:561] object-"openstack"/"cert-swift-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-swift-internal-svc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.064545 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-swift-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-swift-internal-svc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.084321 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.084391 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: I0122 18:00:31.098847 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.104807 4758 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.104884 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.124333 4758 reflector.go:561] object-"openshift-network-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.124418 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.144301 4758 reflector.go:561] object-"openstack"/"kube-state-metrics-tls-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkube-state-metrics-tls-config&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.144356 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"kube-state-metrics-tls-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkube-state-metrics-tls-config&resourceVersion=84252\": dial tcp 
38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.164698 4758 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.165226 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-x57mr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.184544 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.184618 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.204690 4758 reflector.go:561] object-"openstack-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.204772 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.224489 4758 reflector.go:561] object-"openshift-ingress"/"router-metrics-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.224566 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-metrics-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.244896 4758 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.244962 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.264147 4758 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.264215 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.283996 4758 reflector.go:561] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.284048 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ac-dockercfg-9lkdf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.304073 4758 reflector.go:561] object-"openstack"/"watcher-watcher-dockercfg-bvchw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-watcher-dockercfg-bvchw&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.304142 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-watcher-dockercfg-bvchw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-watcher-dockercfg-bvchw&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.323864 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-plugins-conf&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.323920 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-plugins-conf&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.344239 4758 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.344294 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.364155 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-erlang-cookie&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.364228 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-notifications-erlang-cookie&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.384696 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-server-conf&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.384790 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-server-conf\": Failed to watch *v1.ConfigMap: 
failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-server-conf&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: I0122 18:00:31.403489 4758 request.go:700] Waited for 7.17159075s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84343 Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.404200 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.404258 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.426477 4758 reflector.go:561] object-"openstack"/"cert-neutron-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-public-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.426546 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-neutron-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-public-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.445141 4758 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.445212 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.464369 4758 reflector.go:561] object-"openstack"/"cert-glance-default-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-public-svc&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.464431 4758 reflector.go:158] 
"Unhandled Error" err="object-\"openstack\"/\"cert-glance-default-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-glance-default-public-svc&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.484602 4758 reflector.go:561] object-"openstack"/"horizon-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.484667 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.504002 4758 reflector.go:561] object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p9vjx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-p9vjx&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.504223 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"glance-operator-controller-manager-dockercfg-p9vjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-p9vjx&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.524825 4758 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.524910 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.543683 4758 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.543734 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.563878 4758 reflector.go:561] object-"openshift-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.563942 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.583916 4758 reflector.go:561] object-"openstack"/"nova-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.583982 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.604796 4758 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.604887 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.624519 4758 reflector.go:561] object-"openstack"/"cert-watcher-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-internal-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.624592 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-watcher-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-internal-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.643795 4758 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.643874 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.664347 4758 reflector.go:561] object-"metallb-system"/"frr-k8s-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.666221 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.688018 4758 reflector.go:561] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s6gv4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-s6gv4&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.688085 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"neutron-operator-controller-manager-dockercfg-s6gv4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-s6gv4&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.703840 4758 reflector.go:561] object-"openshift-nmstate"/"default-dockercfg-ckpvf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-ckpvf&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.703902 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"default-dockercfg-ckpvf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-ckpvf&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.725234 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.725303 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.744652 4758 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.745307 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.765193 4758 reflector.go:561] object-"openstack"/"openstack-cell1-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.765254 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.784440 4758 reflector.go:561] object-"openshift-ingress"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.784506 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.804734 4758 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.804801 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.825185 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.825271 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.847200 4758 reflector.go:561] object-"openshift-ingress"/"router-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.847290 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.865264 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.865390 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=84313\": 
dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.886166 4758 reflector.go:561] object-"openstack"/"ovncontroller-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.886272 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.904626 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.904700 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.924165 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.924257 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.944977 4758 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.945075 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc 
kubenswrapper[4758]: W0122 18:00:31.964763 4758 reflector.go:561] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.964845 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:31 crc kubenswrapper[4758]: W0122 18:00:31.984859 4758 reflector.go:561] object-"openshift-console"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:31 crc kubenswrapper[4758]: E0122 18:00:31.984948 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.004288 4758 reflector.go:561] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.004373 4758 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-qd74k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.024634 4758 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.024735 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.045664 4758 reflector.go:561] 
object-"openstack"/"cinder-backup-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.045775 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-backup-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.064477 4758 reflector.go:561] object-"openstack"/"openstack-config-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.064575 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.083940 4758 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tzrkw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-tzrkw&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.084079 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-nb-dockercfg-tzrkw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-tzrkw&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.105138 4758 reflector.go:561] object-"cert-manager"/"cert-manager-cainjector-dockercfg-x4h8f": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-x4h8f&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.105221 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-x4h8f\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-x4h8f&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.126365 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-tls-assets-0": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.126465 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-tls-assets-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.144248 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.144335 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.166795 4758 reflector.go:561] object-"metallb-system"/"controller-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.166891 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.184489 4758 reflector.go:561] object-"openstack"/"rabbitmq-notifications-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-config-data&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.184590 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-notifications-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-notifications-config-data&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.204946 4758 reflector.go:561] object-"openstack"/"openstack-config": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.205367 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.226361 4758 reflector.go:561] object-"openstack"/"openstack-edpm-ipam": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-edpm-ipam&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.226468 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-edpm-ipam\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-edpm-ipam&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.247899 4758 reflector.go:561] object-"openstack"/"cert-ovn-metrics": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovn-metrics&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.248013 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovn-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovn-metrics&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.263822 4758 reflector.go:561] object-"openstack"/"swift-conf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-conf&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.263901 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-conf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-conf&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.284869 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=84133": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.284950 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=84133\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.306322 4758 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.306629 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-5nsgg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.324659 4758 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.324940 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.344557 4758 reflector.go:561] object-"openstack"/"barbican-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.344625 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.364537 4758 reflector.go:561] object-"openstack"/"ovncontroller-metrics-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.364618 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-metrics-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.384123 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=84625": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.384206 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=84625\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.403858 4758 request.go:700] Waited for 7.028646803s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=84400 Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.404411 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.404499 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.424538 4758 reflector.go:561] object-"openshift-console"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.424623 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.444340 4758 reflector.go:561] object-"openstack"/"placement-placement-dockercfg-n4qvk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-n4qvk&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.444419 4758 
reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-placement-dockercfg-n4qvk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-n4qvk&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.463898 4758 reflector.go:561] object-"openstack"/"rabbitmq-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=84116": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.463977 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=84116\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.484549 4758 reflector.go:561] object-"openstack"/"rabbitmq-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.484637 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.504436 4758 reflector.go:561] object-"openstack"/"cert-ovncontroller-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovncontroller-ovndbs&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.504515 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovncontroller-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovncontroller-ovndbs&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.523971 4758 reflector.go:561] object-"metallb-system"/"metallb-webhook-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.524052 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.543772 4758 reflector.go:561] object-"openstack"/"nova-metadata-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.543858 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-metadata-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.563909 4758 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.563991 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.583615 4758 reflector.go:561] object-"openshift-machine-config-operator"/"node-bootstrapper-token": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.583688 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.604159 4758 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.604691 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.624594 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.624873 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-98p87\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.644289 4758 reflector.go:561] object-"openshift-ingress"/"router-stats-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.644363 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-stats-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.664011 4758 reflector.go:561] object-"openstack"/"rabbitmq-cell1-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-config-data&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.664108 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-config-data&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.684259 4758 reflector.go:561] object-"openstack"/"horizon-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.684439 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.688115 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4a7842e2b9ad5dd30a70d3f55fdcf0b151b96e67e4db2513e28a9ec85770037e"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.691960 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" event={"ID":"26d5529a-b270-40fc-9faa-037435dd2f80","Type":"ContainerStarted","Data":"dee0e88f7ebd2c75fbdae41aff3b519def894d0bc14fe932409343bfae737e93"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.694073 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" event={"ID":"40845474-36a2-48c0-a0df-af5deb2a31fd","Type":"ContainerStarted","Data":"bff07a437f5fc924349f5e1eb2cd2cd67ab6a607451a68300f3a9b80f24fafb4"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.694315 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.697497 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" event={"ID":"659f7d3e-5518-4d19-bb54-e39295a667d2","Type":"ContainerStarted","Data":"19cc39117dfffee7f12d4214dc6819efe0f6f773093b16cab00870da7b607074"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.700603 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.701207 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"90a60b0d0480658382ecc09fc4ea68bf53474eb76e1a592ad7ad4a32cf2b71c5"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.703646 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" event={"ID":"f5135718-a42b-4089-922b-9fba781fe6db","Type":"ContainerStarted","Data":"ec249bf443459d83099a5ffad149437d9827fa235843daa76dae2c305f96d608"} Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.703880 4758 reflector.go:561] object-"openstack"/"cert-neutron-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-ovndbs&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.703951 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-neutron-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-ovndbs&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc 
kubenswrapper[4758]: I0122 18:00:32.706024 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" event={"ID":"d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13","Type":"ContainerStarted","Data":"0be7c086d866be7d2651793d7e97017c5b15b79937a87cd8a402e21221ec4d55"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.709419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" event={"ID":"35a3fafd-45ea-465d-90ef-36148a60685e","Type":"ContainerStarted","Data":"8d740abd0ed4523b0bbc53fb6cb986e3dd12d30030fb5decbccb8d5c79e3cb4d"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.709605 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.715451 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver/0.log" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.715959 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9fd3a6328a05eda04a58939c9eb4e24eb806796b36f3e225860f87a080f72c00"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.718864 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" event={"ID":"c4847ca7-5057-4d6d-80c5-f74c7d633e83","Type":"ContainerStarted","Data":"c54af0dd1a303dde1b3d50fb47c891a9ecbc442d95e4836aa2210f995dcf6c90"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.719187 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.721982 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" event={"ID":"6daa1231-490e-4ff7-9157-f49cdec96a5e","Type":"ContainerStarted","Data":"542ed8d1796b1c80fd6e195ec7b32f904339447bd00b8e67d8382cb94f9a53f8"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.723661 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" event={"ID":"e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7","Type":"ContainerStarted","Data":"75150cc4b783423b7047afafc321b44caa1cb3d2820b82c5afc4ef8e57d0e276"} Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.723657 4758 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.723770 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc 
kubenswrapper[4758]: I0122 18:00:32.727004 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" event={"ID":"16d19f40-45e9-4f1a-b953-e5c68ca014f3","Type":"ContainerStarted","Data":"6057c010d5a2e16b55b128c8b625c607ee210bd2a7542ae56469c8480cda9a9e"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.729677 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" event={"ID":"7d2439ad-1ca6-4c24-9d15-e04f0f89aedf","Type":"ContainerStarted","Data":"a13378759202bf3b5e99273b246f563647b62f9c9ba3d166a097fb3b7a5cd4d4"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.732231 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" event={"ID":"c73a71b4-f1fd-4a6c-9832-ce9b48a5f220","Type":"ContainerStarted","Data":"313d83e614b8a8d25ca53b49fd49f5b0805854094c56adf9746feed980253f0f"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.734354 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" event={"ID":"4801e5d3-a66d-4856-bfc2-95dfebf6f442","Type":"ContainerStarted","Data":"b0666549cc9f9b2d0e96b99552abca1705409a47f668dc1f9454d507b97ab8cb"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.736350 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" event={"ID":"d67bb459-81fe-48a2-ac8a-cb4441bb35bb","Type":"ContainerStarted","Data":"95b8c3c6cc21b228c22b9ffe3228bc4810df2f462264c83f968420945773d045"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.736931 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.738081 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" event={"ID":"19b4b900-d90f-4e59-b082-61f058f5882b","Type":"ContainerStarted","Data":"48fc7905bf24391116479c62be583909766b4c209c1c234abbd54bc7146a4de2"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.738266 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.740290 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" event={"ID":"644142ed-c937-406d-9fd5-3fe856879a92","Type":"ContainerStarted","Data":"90a833c71c723543843ab06176337d5379ddb1f8b1a4aacf6978442d22f2d550"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.740447 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.742551 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" event={"ID":"8afd29cc-2dab-460e-ad9d-f17690c15f41","Type":"ContainerStarted","Data":"0874a7ddc92ab0b24afc711eb8d63f639a492a7de869c8a7af586bf54214b376"} Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.744105 4758 reflector.go:561] 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=84567": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.744180 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-k9rxt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=84567\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.744266 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" event={"ID":"fa976a5e-7cd9-402f-9792-015ca1488d1f","Type":"ContainerStarted","Data":"e613c4f2ad9c6863c7df30149d4cb496ab5143ff68022815bf706b1598c0c8f7"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.744365 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.746621 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" event={"ID":"4612798c-6ae6-4a07-afe6-3f3574ee467b","Type":"ContainerStarted","Data":"689996e75ecd15c365b2fe40e36fee8622a9dc70ebab575fcf9d35ffee33d52f"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.749686 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lpprz" event={"ID":"cc433179-ae5b-4250-80c2-97af371fdfed","Type":"ContainerStarted","Data":"e4095861ad8fe540cd9760115ea9bf60faaf90fc9cf31a69b5d4fc258b8ebeaf"} Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.749874 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lpprz" Jan 22 18:00:32 crc kubenswrapper[4758]: I0122 18:00:32.752625 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93923998-0016-4db9-adff-a433c7a8d57c","Type":"ContainerStarted","Data":"429da0427fabb2067a99b507976d8344e2c2358762f71dab575c252760a77478"} Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.763912 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.764202 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.784622 4758 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list 
*v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.784715 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.804177 4758 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.804262 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.824146 4758 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.824253 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.844230 4758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=84609": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.844304 4758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=84609\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.864926 4758 reflector.go:561] object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.865030 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.884198 4758 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.884271 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.904809 4758 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.904900 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.924519 4758 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.924613 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.944103 4758 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.944193 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.968995 4758 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.969072 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:32 crc kubenswrapper[4758]: W0122 18:00:32.984771 4758 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:32 crc kubenswrapper[4758]: E0122 18:00:32.984882 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84400\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.005341 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.005445 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-5xfcg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 
18:00:33.023957 4758 reflector.go:561] object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.024280 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"rabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.044053 4758 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.044162 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.064458 4758 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84524": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.064542 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84524\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.083864 4758 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=84645": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.084126 4758 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=84645\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.104499 4758 reflector.go:561] 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.104870 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.124441 4758 reflector.go:561] object-"openstack"/"watcher-applier-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-applier-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.124850 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-applier-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-applier-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.144384 4758 reflector.go:561] object-"openstack"/"cinder-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.144497 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.164073 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.164163 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.184482 4758 reflector.go:561] 
object-"openstack"/"cert-keystone-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-public-svc&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.184574 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-keystone-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-keystone-public-svc&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.204603 4758 reflector.go:561] object-"openstack"/"nova-cell1-novncproxy-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.204722 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-novncproxy-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.224901 4758 reflector.go:561] object-"openstack"/"cert-nova-novncproxy-cell1-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-public-svc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.225009 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-novncproxy-cell1-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-public-svc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.244041 4758 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.244133 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.264931 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.265080 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.284463 4758 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.284565 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2bh8d\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.304842 4758 reflector.go:561] object-"openstack"/"cert-cinder-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-public-svc&resourceVersion=84567": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.304926 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-cinder-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-cinder-public-svc&resourceVersion=84567\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.324087 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.324181 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc 
kubenswrapper[4758]: W0122 18:00:33.344356 4758 reflector.go:561] object-"openstack"/"cinder-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.344443 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.365591 4758 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.365684 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.384383 4758 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.384794 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.405326 4758 reflector.go:561] object-"openstack"/"glance-glance-dockercfg-th7td": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-th7td&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.405418 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-glance-dockercfg-th7td\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-th7td&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.424365 4758 request.go:700] Waited for 5.100677002s due to client-side throttling, not priority and 
fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84220 Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.425029 4758 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.425135 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.444011 4758 reflector.go:561] object-"metallb-system"/"frr-k8s-daemon-dockercfg-s75rc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-s75rc&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.444272 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-daemon-dockercfg-s75rc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-s75rc&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.464038 4758 reflector.go:561] object-"openstack"/"cert-rabbitmq-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-cell1-svc&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.464313 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-rabbitmq-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-cell1-svc&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.484347 4758 reflector.go:561] object-"openstack"/"openstack-cell1-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.484418 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 
18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.504344 4758 reflector.go:561] object-"openshift-image-registry"/"installation-pull-secrets": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.504451 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"installation-pull-secrets\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.524663 4758 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.524718 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.545046 4758 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.545459 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.564484 4758 reflector.go:561] object-"openstack"/"keystone-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.564562 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.584549 4758 reflector.go:561] 
object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.584635 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.604902 4758 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84276": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.604987 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84276\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.624374 4758 reflector.go:561] object-"openstack"/"barbican-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.624472 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.644644 4758 reflector.go:561] object-"metallb-system"/"controller-dockercfg-qdnhd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-qdnhd&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.644717 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-qdnhd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-qdnhd&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.664468 4758 reflector.go:561] 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.664538 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.684314 4758 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4q6rk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-4q6rk&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.684397 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-manager-dockercfg-4q6rk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-4q6rk&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.704039 4758 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.704112 4758 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.723825 4758 reflector.go:561] object-"openstack"/"tempest-tests-tempest-custom-data-s0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dtempest-tests-tempest-custom-data-s0&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.723913 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"tempest-tests-tempest-custom-data-s0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dtempest-tests-tempest-custom-data-s0&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc 
kubenswrapper[4758]: W0122 18:00:33.744865 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.744949 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.763996 4758 reflector.go:561] object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-vencrypt&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.764126 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-novncproxy-cell1-vencrypt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-novncproxy-cell1-vencrypt&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.766890 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" event={"ID":"5ade5af9-f79e-4285-841c-0f08e88cca47","Type":"ContainerStarted","Data":"48d5cef5574134fc4025a12edb366eb6c4bff2060370fa5e7fb9ec24d2b05e35"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.767431 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.769070 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" event={"ID":"cdd1962b-fbf0-480c-b5e2-e28ee6988046","Type":"ContainerStarted","Data":"b0478355a9a15a9cc6a69787becc745f32d33a182f4c0a808e32762b434b5cb3"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.769172 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.771497 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" event={"ID":"c3e0f5c7-10cb-441c-9516-f6de8fe29757","Type":"ContainerStarted","Data":"55d6160dda674ad5040345a0fa8788e1f53bdf483cc0a2d4c6c12ef14642a65c"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.771823 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.774624 4758 generic.go:334] "Generic (PLEG): container finished" podID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" 
containerID="27f9b813310e252a7f793f80ac5787b3ff22aa21d91520166dc9e50750eb1857" exitCode=1 Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.774712 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78","Type":"ContainerDied","Data":"27f9b813310e252a7f793f80ac5787b3ff22aa21d91520166dc9e50750eb1857"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.775138 4758 scope.go:117] "RemoveContainer" containerID="27f9b813310e252a7f793f80ac5787b3ff22aa21d91520166dc9e50750eb1857" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.775519 4758 scope.go:117] "RemoveContainer" containerID="78a6ec775e3414b464115c9d589c3eae8881ff824d356dbc942d4deea2d4d1d1" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.777915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" event={"ID":"e7fdd2cd-e517-46b5-acb3-22b59b7f132f","Type":"ContainerStarted","Data":"131dfcea8a1b84e64e8e8eaad87f89778f3814e34c1690ce4a37e7783bd38c6b"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.778133 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.787395 4758 reflector.go:561] object-"openstack"/"dnsmasq-dns-dockercfg-w2txv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-w2txv&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.787447 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" event={"ID":"35a3fafd-45ea-465d-90ef-36148a60685e","Type":"ContainerDied","Data":"8d740abd0ed4523b0bbc53fb6cb986e3dd12d30030fb5decbccb8d5c79e3cb4d"} Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.787471 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dnsmasq-dns-dockercfg-w2txv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-w2txv&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.787420 4758 generic.go:334] "Generic (PLEG): container finished" podID="35a3fafd-45ea-465d-90ef-36148a60685e" containerID="8d740abd0ed4523b0bbc53fb6cb986e3dd12d30030fb5decbccb8d5c79e3cb4d" exitCode=1 Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.789019 4758 scope.go:117] "RemoveContainer" containerID="8d740abd0ed4523b0bbc53fb6cb986e3dd12d30030fb5decbccb8d5c79e3cb4d" Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.790460 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-sb974_openstack-operators(35a3fafd-45ea-465d-90ef-36148a60685e)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" podUID="35a3fafd-45ea-465d-90ef-36148a60685e" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.790647 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" event={"ID":"78689fee-3fe7-47d2-866d-6465d23378ea","Type":"ContainerStarted","Data":"67ec97fb149004a21a40e31a5dad635eea6dfeab5795c5e8f277e2e20a341301"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.791762 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.794504 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" event={"ID":"25848d11-6830-45f8-aff0-0082594b5f3f","Type":"ContainerStarted","Data":"954fa87f14db6331d6b63a3d27dc875468e21cdcd5f69f9250c6cffc55750ba0"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.794803 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.799727 4758 generic.go:334] "Generic (PLEG): container finished" podID="71c16ac1-3276-4096-93c5-d10765320713" containerID="40620850c0b41ba5d105b5476e01b243745145ee1653fe36b73b07bb40385f91" exitCode=1 Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.799780 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" event={"ID":"71c16ac1-3276-4096-93c5-d10765320713","Type":"ContainerDied","Data":"40620850c0b41ba5d105b5476e01b243745145ee1653fe36b73b07bb40385f91"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.800568 4758 scope.go:117] "RemoveContainer" containerID="40620850c0b41ba5d105b5476e01b243745145ee1653fe36b73b07bb40385f91" Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.800880 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-85b8fd6746-9vvd6_openstack-operators(71c16ac1-3276-4096-93c5-d10765320713)\"" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.802221 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" event={"ID":"86017532-da20-4917-8f8b-34190218edc2","Type":"ContainerStarted","Data":"00f9f7e22c37037c5a3da51729e231d9b6af70fe75b76ee1a114d7df66735fd4"} Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.803714 4758 reflector.go:561] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2fs5z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-2fs5z&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.803797 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ovn-operator-controller-manager-dockercfg-2fs5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-2fs5z&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" 
Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.804608 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" event={"ID":"901f347a-3b10-4392-8247-41a859112544","Type":"ContainerStarted","Data":"3c2b71af360e27d5d489370c10f51070fefff95fae2f7bc7a6554fadad5db9a8"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.805375 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.811995 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bpw4j" event={"ID":"36cf0be1-e796-4c9e-b232-2a0c0ceaaa79","Type":"ContainerStarted","Data":"706c6e2d27fba9465af00cb0344dbee36f18d4867b0f6be0bc9a50bbb38169aa"} Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.812038 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.812106 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.812474 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.813422 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" podUID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7572/metrics\": dial tcp 10.217.0.55:7572: connect: connection refused" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.813564 4758 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" containerID="cri-o://cf45c93385b847cb95046805b7d0579501a8fde4e96aec554951f00da0293ebc" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.813595 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.814013 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" podUID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.814075 4758 status_manager.go:317] "Container readiness changed for unknown container" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" containerID="cri-o://c62d76911da0f5713e9e27fb9411fcce83f728d29a3f1dfcd100c7f9a1299640" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.814095 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.816130 4758 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" 
containerID="cri-o://ad4303b386c6e21f3904b24f988068646e3106398b796a612dade9432bc95cd7" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.816146 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.816163 4758 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" containerID="cri-o://240edd2f680249409c003b4f15a98966b1e1d8f25dbe8d8d91e622618a7b238d" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.816171 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.816182 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.816192 4758 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" containerID="cri-o://f3cef0682a195659f7b5e3123741938c84f23055a202fd57fcc714b2d9d731c7" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.816217 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.824608 4758 reflector.go:561] object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.824675 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.829800 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.832511 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.832555 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.832825 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 22 18:00:33 crc kubenswrapper[4758]: I0122 18:00:33.832861 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" 
probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.844299 4758 reflector.go:561] object-"openstack"/"ovnnorthd-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.844719 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.864619 4758 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.864839 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.884661 4758 reflector.go:561] object-"openstack"/"cert-nova-metadata-internal-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-metadata-internal-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.885000 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-nova-metadata-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-nova-metadata-internal-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.904591 4758 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.904895 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: 
connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.924969 4758 reflector.go:561] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-g7xdx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-g7xdx&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.925245 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"octavia-operator-controller-manager-dockercfg-g7xdx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-g7xdx&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.948074 4758 reflector.go:561] object-"openstack"/"cert-ovndbcluster-sb-ovndbs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-sb-ovndbs&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.948307 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovndbcluster-sb-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovndbcluster-sb-ovndbs&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.964218 4758 reflector.go:561] object-"cert-manager"/"cert-manager-dockercfg-qcl9m": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-qcl9m&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.964499 4758 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-dockercfg-qcl9m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-qcl9m&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:33 crc kubenswrapper[4758]: W0122 18:00:33.984267 4758 reflector.go:561] object-"openstack"/"cert-rabbitmq-notifications-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-notifications-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:33 crc kubenswrapper[4758]: E0122 18:00:33.984340 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-rabbitmq-notifications-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-rabbitmq-notifications-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 
18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.004185 4758 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.004676 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.024789 4758 reflector.go:561] object-"openstack"/"nova-cell1-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.024867 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.026306 4758 scope.go:117] "RemoveContainer" containerID="cd55dc9adc842248637987f9b3fb3f590baf4dde9075a2f9fba7f513cf9fe363" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.044495 4758 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.045085 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.063882 4758 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.063967 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" 
logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.084011 4758 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-brw4q": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-brw4q&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.084362 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-brw4q\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-brw4q&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.106327 4758 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.106442 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.125013 4758 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.125093 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.126655 4758 scope.go:117] "RemoveContainer" containerID="b7623be75913161b201b9b3a55bc1959c9b6136ccdad6e64a3461f0147694c7c" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.144328 4758 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.144592 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.164658 4758 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.164710 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-qt55r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.185518 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.185606 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.206290 4758 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.206384 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84313\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.224496 4758 reflector.go:561] object-"openshift-image-registry"/"image-registry-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc 
kubenswrapper[4758]: E0122 18:00:34.224593 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.244435 4758 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.245617 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.264889 4758 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.264959 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.285196 4758 reflector.go:561] object-"openstack"/"swift-proxy-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-proxy-config-data&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.285645 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-proxy-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-proxy-config-data&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.304439 4758 reflector.go:561] object-"openstack-operators"/"metrics-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=84680": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.304555 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openstack-operators\"/\"metrics-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=84680\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.325133 4758 reflector.go:561] object-"openstack-operators"/"webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.325222 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.344116 4758 reflector.go:561] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dbtnp": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-dbtnp&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.344204 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"telemetry-operator-controller-manager-dockercfg-dbtnp\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-dbtnp&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.364724 4758 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.365123 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.384900 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc 
kubenswrapper[4758]: E0122 18:00:34.384989 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-vw8fw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.404194 4758 reflector.go:561] object-"openstack"/"metric-storage-prometheus-dockercfg-4ftsd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4ftsd&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.404304 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"metric-storage-prometheus-dockercfg-4ftsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4ftsd&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.424509 4758 reflector.go:561] object-"openstack"/"swift-swift-dockercfg-xgjlh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-swift-dockercfg-xgjlh&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.424621 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-swift-dockercfg-xgjlh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dswift-swift-dockercfg-xgjlh&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.443500 4758 request.go:700] Waited for 4.611090599s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-nwvvt&resourceVersion=84295 Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.444063 4758 reflector.go:561] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nwvvt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-nwvvt&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.444176 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"test-operator-controller-manager-dockercfg-nwvvt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-nwvvt&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection 
refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.463866 4758 reflector.go:561] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2zlds": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-2zlds&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.463955 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-ovnnorthd-dockercfg-2zlds\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-2zlds&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.484858 4758 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.484971 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.504774 4758 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.504876 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.524159 4758 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.524236 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: 
connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.544416 4758 reflector.go:561] object-"openstack"/"cinder-volume-nfs-2-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-2-config-data&resourceVersion=84343": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.544500 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-volume-nfs-2-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-nfs-2-config-data&resourceVersion=84343\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.564121 4758 reflector.go:561] object-"openstack"/"combined-ca-bundle": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.564181 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"combined-ca-bundle\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.583839 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.583917 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.604424 4758 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.604571 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=84424\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.624826 4758 reflector.go:561] 
object-"openstack"/"cert-placement-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-public-svc&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.624905 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-placement-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-placement-public-svc&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.644600 4758 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.644696 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.664430 4758 reflector.go:561] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-s6bn2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-s6bn2&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.664549 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"watcher-operator-controller-manager-dockercfg-s6bn2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-s6bn2&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.684725 4758 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.684859 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.704486 4758 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list 
*v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=84512": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.704588 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=84512\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.724569 4758 reflector.go:561] object-"openstack"/"ovsdbserver-sb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=84580": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.724970 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-sb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=84580\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.732204 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="7s" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.744936 4758 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.745021 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.764127 4758 reflector.go:561] object-"openstack"/"neutron-neutron-dockercfg-zvr2k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-zvr2k&resourceVersion=84424": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.764207 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-neutron-dockercfg-zvr2k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-zvr2k&resourceVersion=84424\": dial 
tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.784383 4758 reflector.go:561] object-"openstack"/"ovndbcluster-nb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.784470 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.804818 4758 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=84295": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.804927 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=84295\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.823991 4758 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.824088 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.825789 4758 generic.go:334] "Generic (PLEG): container finished" podID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" containerID="bc4f970a22c54315f6513899232257efae4e7e4b6f571d8f0a84f9b878900842" exitCode=1 Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.825838 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78","Type":"ContainerDied","Data":"bc4f970a22c54315f6513899232257efae4e7e4b6f571d8f0a84f9b878900842"} Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.825884 4758 scope.go:117] "RemoveContainer" 
containerID="27f9b813310e252a7f793f80ac5787b3ff22aa21d91520166dc9e50750eb1857" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.826541 4758 scope.go:117] "RemoveContainer" containerID="bc4f970a22c54315f6513899232257efae4e7e4b6f571d8f0a84f9b878900842" Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.826844 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(d5a7a812-eaba-4ae7-8d97-e80ae4f70d78)\"" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.835084 4758 generic.go:334] "Generic (PLEG): container finished" podID="fa976a5e-7cd9-402f-9792-015ca1488d1f" containerID="e613c4f2ad9c6863c7df30149d4cb496ab5143ff68022815bf706b1598c0c8f7" exitCode=1 Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.835158 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" event={"ID":"fa976a5e-7cd9-402f-9792-015ca1488d1f","Type":"ContainerDied","Data":"e613c4f2ad9c6863c7df30149d4cb496ab5143ff68022815bf706b1598c0c8f7"} Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.835826 4758 scope.go:117] "RemoveContainer" containerID="e613c4f2ad9c6863c7df30149d4cb496ab5143ff68022815bf706b1598c0c8f7" Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.836099 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-skwtp_openstack-operators(fa976a5e-7cd9-402f-9792-015ca1488d1f)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" podUID="fa976a5e-7cd9-402f-9792-015ca1488d1f" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.839094 4758 generic.go:334] "Generic (PLEG): container finished" podID="4612798c-6ae6-4a07-afe6-3f3574ee467b" containerID="689996e75ecd15c365b2fe40e36fee8622a9dc70ebab575fcf9d35ffee33d52f" exitCode=1 Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.839143 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" event={"ID":"4612798c-6ae6-4a07-afe6-3f3574ee467b","Type":"ContainerDied","Data":"689996e75ecd15c365b2fe40e36fee8622a9dc70ebab575fcf9d35ffee33d52f"} Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.839466 4758 scope.go:117] "RemoveContainer" containerID="689996e75ecd15c365b2fe40e36fee8622a9dc70ebab575fcf9d35ffee33d52f" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.845000 4758 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=84173": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.845069 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=84173\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc 
kubenswrapper[4758]: I0122 18:00:34.850251 4758 generic.go:334] "Generic (PLEG): container finished" podID="19b4b900-d90f-4e59-b082-61f058f5882b" containerID="48fc7905bf24391116479c62be583909766b4c209c1c234abbd54bc7146a4de2" exitCode=1 Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.850303 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" event={"ID":"19b4b900-d90f-4e59-b082-61f058f5882b","Type":"ContainerDied","Data":"48fc7905bf24391116479c62be583909766b4c209c1c234abbd54bc7146a4de2"} Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.851105 4758 scope.go:117] "RemoveContainer" containerID="48fc7905bf24391116479c62be583909766b4c209c1c234abbd54bc7146a4de2" Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.851447 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-4jthc_openstack-operators(19b4b900-d90f-4e59-b082-61f058f5882b)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" podUID="19b4b900-d90f-4e59-b082-61f058f5882b" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.853574 4758 scope.go:117] "RemoveContainer" containerID="8d740abd0ed4523b0bbc53fb6cb986e3dd12d30030fb5decbccb8d5c79e3cb4d" Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.853788 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-sb974_openstack-operators(35a3fafd-45ea-465d-90ef-36148a60685e)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" podUID="35a3fafd-45ea-465d-90ef-36148a60685e" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.855929 4758 generic.go:334] "Generic (PLEG): container finished" podID="7d2439ad-1ca6-4c24-9d15-e04f0f89aedf" containerID="a13378759202bf3b5e99273b246f563647b62f9c9ba3d166a097fb3b7a5cd4d4" exitCode=1 Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.856000 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" event={"ID":"7d2439ad-1ca6-4c24-9d15-e04f0f89aedf","Type":"ContainerDied","Data":"a13378759202bf3b5e99273b246f563647b62f9c9ba3d166a097fb3b7a5cd4d4"} Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.856313 4758 scope.go:117] "RemoveContainer" containerID="a13378759202bf3b5e99273b246f563647b62f9c9ba3d166a097fb3b7a5cd4d4" Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.856559 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-zfcl5_openstack-operators(7d2439ad-1ca6-4c24-9d15-e04f0f89aedf)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" podUID="7d2439ad-1ca6-4c24-9d15-e04f0f89aedf" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.858935 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f2gvw_6daa1231-490e-4ff7-9157-f49cdec96a5e/marketplace-operator/1.log" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.859409 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="6daa1231-490e-4ff7-9157-f49cdec96a5e" containerID="542ed8d1796b1c80fd6e195ec7b32f904339447bd00b8e67d8382cb94f9a53f8" exitCode=1 Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.859478 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" event={"ID":"6daa1231-490e-4ff7-9157-f49cdec96a5e","Type":"ContainerDied","Data":"542ed8d1796b1c80fd6e195ec7b32f904339447bd00b8e67d8382cb94f9a53f8"} Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.859843 4758 scope.go:117] "RemoveContainer" containerID="542ed8d1796b1c80fd6e195ec7b32f904339447bd00b8e67d8382cb94f9a53f8" Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.860057 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-f2gvw_openshift-marketplace(6daa1231-490e-4ff7-9157-f49cdec96a5e)\"" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" podUID="6daa1231-490e-4ff7-9157-f49cdec96a5e" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.861946 4758 generic.go:334] "Generic (PLEG): container finished" podID="40845474-36a2-48c0-a0df-af5deb2a31fd" containerID="bff07a437f5fc924349f5e1eb2cd2cd67ab6a607451a68300f3a9b80f24fafb4" exitCode=1 Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.862705 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" event={"ID":"40845474-36a2-48c0-a0df-af5deb2a31fd","Type":"ContainerDied","Data":"bff07a437f5fc924349f5e1eb2cd2cd67ab6a607451a68300f3a9b80f24fafb4"} Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.863131 4758 scope.go:117] "RemoveContainer" containerID="bff07a437f5fc924349f5e1eb2cd2cd67ab6a607451a68300f3a9b80f24fafb4" Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.863387 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-547cbdb99f-4rlkk_openstack-operators(40845474-36a2-48c0-a0df-af5deb2a31fd)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" podUID="40845474-36a2-48c0-a0df-af5deb2a31fd" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.864097 4758 reflector.go:561] object-"openstack"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.864154 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84357\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.867042 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.867089 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.885006 4758 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.885099 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-r9srn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.908417 4758 reflector.go:561] object-"openstack"/"watcher-decision-engine-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-decision-engine-config-data&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.908523 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"watcher-decision-engine-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dwatcher-decision-engine-config-data&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.928290 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.928403 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84485\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: I0122 18:00:34.940539 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.944573 4758 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.944667 4758 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84439\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.964776 4758 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=84627": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.964874 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=84627\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:34 crc kubenswrapper[4758]: W0122 18:00:34.985110 4758 reflector.go:561] object-"openstack"/"cert-watcher-public-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-public-svc&resourceVersion=84193": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:34 crc kubenswrapper[4758]: E0122 18:00:34.985205 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-watcher-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-watcher-public-svc&resourceVersion=84193\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.004307 4758 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.004377 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.025237 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 
18:00:35.025323 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.043836 4758 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.044146 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.063703 4758 reflector.go:561] object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.063798 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-thanos-prometheus-http-client-file\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.084349 4758 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.084446 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.104453 4758 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.104527 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.123872 4758 reflector.go:561] object-"openstack"/"rabbitmq-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.123991 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.145020 4758 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4jql8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-4jql8&resourceVersion=84466": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.145343 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-4jql8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-4jql8&resourceVersion=84466\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.167383 4758 reflector.go:561] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.167471 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"service-ca-dockercfg-pn86c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.183690 4758 reflector.go:561] object-"openshift-ingress"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.183796 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=84647\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.207273 4758 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=84384": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.207359 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=84384\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.225496 4758 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.225574 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.243940 4758 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.244259 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.263946 4758 reflector.go:561] 
object-"openstack"/"swift-ring-files": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-ring-files&resourceVersion=84220": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.264028 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"swift-ring-files\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dswift-ring-files&resourceVersion=84220\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.300007 4758 reflector.go:561] object-"openshift-dns"/"dns-dockercfg-jwfmh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=84252": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.300121 4758 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-dockercfg-jwfmh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=84252\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: W0122 18:00:35.304987 4758 reflector.go:561] object-"openstack"/"cert-galera-openstack-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=84143": dial tcp 38.102.83.223:6443: connect: connection refused Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.305067 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=84143\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.310947 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc73a71b4_f1fd_4a6c_9832_ce9b48a5f220.slice/crio-313d83e614b8a8d25ca53b49fd49f5b0805854094c56adf9746feed980253f0f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8afd29cc_2dab_460e_ad9d_f17690c15f41.slice/crio-0874a7ddc92ab0b24afc711eb8d63f639a492a7de869c8a7af586bf54214b376.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86017532_da20_4917_8f8b_34190218edc2.slice/crio-conmon-00f9f7e22c37037c5a3da51729e231d9b6af70fe75b76ee1a114d7df66735fd4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc73a71b4_f1fd_4a6c_9832_ce9b48a5f220.slice/crio-conmon-313d83e614b8a8d25ca53b49fd49f5b0805854094c56adf9746feed980253f0f.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5135718_a42b_4089_922b_9fba781fe6db.slice/crio-ec249bf443459d83099a5ffad149437d9827fa235843daa76dae2c305f96d608.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd67bb459_81fe_48a2_ac8a_cb4441bb35bb.slice/crio-95b8c3c6cc21b228c22b9ffe3228bc4810df2f462264c83f968420945773d045.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc433179_ae5b_4250_80c2_97af371fdfed.slice/crio-e4095861ad8fe540cd9760115ea9bf60faaf90fc9cf31a69b5d4fc258b8ebeaf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd67bb459_81fe_48a2_ac8a_cb4441bb35bb.slice/crio-conmon-95b8c3c6cc21b228c22b9ffe3228bc4810df2f462264c83f968420945773d045.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8afd29cc_2dab_460e_ad9d_f17690c15f41.slice/crio-conmon-0874a7ddc92ab0b24afc711eb8d63f639a492a7de869c8a7af586bf54214b376.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86017532_da20_4917_8f8b_34190218edc2.slice/crio-00f9f7e22c37037c5a3da51729e231d9b6af70fe75b76ee1a114d7df66735fd4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5135718_a42b_4089_922b_9fba781fe6db.slice/crio-conmon-ec249bf443459d83099a5ffad149437d9827fa235843daa76dae2c305f96d608.scope\": RecentStats: unable to find data in memory cache]" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.443635 4758 request.go:700] Waited for 4.663173499s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=84485 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.482094 4758 scope.go:117] "RemoveContainer" containerID="ce5016f114838dcaca7cc66b44c49904276b6456085e1179fe6e8e2419474ace" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.544374 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.567472 4758 scope.go:117] "RemoveContainer" containerID="a86ae74b37544ab164be41ebf400131e9e7d915da894679621c4bbdc42ef92f9" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.597953 4758 scope.go:117] "RemoveContainer" containerID="d51fb1ad15f929a23ca45418e301aaa67b68ac4fdfe0dfa8eb39fcbdb4b8a0f6" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.695927 4758 scope.go:117] "RemoveContainer" containerID="f3cef0682a195659f7b5e3123741938c84f23055a202fd57fcc714b2d9d731c7" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.728413 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.728516 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.873243 4758 generic.go:334] "Generic (PLEG): container finished" podID="86017532-da20-4917-8f8b-34190218edc2" 
containerID="00f9f7e22c37037c5a3da51729e231d9b6af70fe75b76ee1a114d7df66735fd4" exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.873334 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" event={"ID":"86017532-da20-4917-8f8b-34190218edc2","Type":"ContainerDied","Data":"00f9f7e22c37037c5a3da51729e231d9b6af70fe75b76ee1a114d7df66735fd4"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.874031 4758 scope.go:117] "RemoveContainer" containerID="00f9f7e22c37037c5a3da51729e231d9b6af70fe75b76ee1a114d7df66735fd4" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.874258 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-cf98fcc89-qg57g_cert-manager(86017532-da20-4917-8f8b-34190218edc2)\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" podUID="86017532-da20-4917-8f8b-34190218edc2" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.881865 4758 generic.go:334] "Generic (PLEG): container finished" podID="e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7" containerID="75150cc4b783423b7047afafc321b44caa1cb3d2820b82c5afc4ef8e57d0e276" exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.881965 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" event={"ID":"e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7","Type":"ContainerDied","Data":"75150cc4b783423b7047afafc321b44caa1cb3d2820b82c5afc4ef8e57d0e276"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.882695 4758 scope.go:117] "RemoveContainer" containerID="75150cc4b783423b7047afafc321b44caa1cb3d2820b82c5afc4ef8e57d0e276" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.883094 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-gd568_openstack-operators(e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" podUID="e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.884844 4758 generic.go:334] "Generic (PLEG): container finished" podID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" containerID="6057c010d5a2e16b55b128c8b625c607ee210bd2a7542ae56469c8480cda9a9e" exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.884905 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" event={"ID":"16d19f40-45e9-4f1a-b953-e5c68ca014f3","Type":"ContainerDied","Data":"6057c010d5a2e16b55b128c8b625c607ee210bd2a7542ae56469c8480cda9a9e"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.886012 4758 scope.go:117] "RemoveContainer" containerID="6057c010d5a2e16b55b128c8b625c607ee210bd2a7542ae56469c8480cda9a9e" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.886363 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-jr994_openstack-operators(16d19f40-45e9-4f1a-b953-e5c68ca014f3)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" 
podUID="16d19f40-45e9-4f1a-b953-e5c68ca014f3" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.887085 4758 generic.go:334] "Generic (PLEG): container finished" podID="26d5529a-b270-40fc-9faa-037435dd2f80" containerID="dee0e88f7ebd2c75fbdae41aff3b519def894d0bc14fe932409343bfae737e93" exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.887158 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" event={"ID":"26d5529a-b270-40fc-9faa-037435dd2f80","Type":"ContainerDied","Data":"dee0e88f7ebd2c75fbdae41aff3b519def894d0bc14fe932409343bfae737e93"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.887487 4758 scope.go:117] "RemoveContainer" containerID="dee0e88f7ebd2c75fbdae41aff3b519def894d0bc14fe932409343bfae737e93" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.887755 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-cb5t8_openstack-operators(26d5529a-b270-40fc-9faa-037435dd2f80)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" podUID="26d5529a-b270-40fc-9faa-037435dd2f80" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.889415 4758 generic.go:334] "Generic (PLEG): container finished" podID="f5135718-a42b-4089-922b-9fba781fe6db" containerID="ec249bf443459d83099a5ffad149437d9827fa235843daa76dae2c305f96d608" exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.889472 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" event={"ID":"f5135718-a42b-4089-922b-9fba781fe6db","Type":"ContainerDied","Data":"ec249bf443459d83099a5ffad149437d9827fa235843daa76dae2c305f96d608"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.890104 4758 scope.go:117] "RemoveContainer" containerID="ec249bf443459d83099a5ffad149437d9827fa235843daa76dae2c305f96d608" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.890379 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-lb8mx_openstack-operators(f5135718-a42b-4089-922b-9fba781fe6db)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" podUID="f5135718-a42b-4089-922b-9fba781fe6db" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.892676 4758 generic.go:334] "Generic (PLEG): container finished" podID="cc433179-ae5b-4250-80c2-97af371fdfed" containerID="e4095861ad8fe540cd9760115ea9bf60faaf90fc9cf31a69b5d4fc258b8ebeaf" exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.892786 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lpprz" event={"ID":"cc433179-ae5b-4250-80c2-97af371fdfed","Type":"ContainerDied","Data":"e4095861ad8fe540cd9760115ea9bf60faaf90fc9cf31a69b5d4fc258b8ebeaf"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.893492 4758 scope.go:117] "RemoveContainer" containerID="e4095861ad8fe540cd9760115ea9bf60faaf90fc9cf31a69b5d4fc258b8ebeaf" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.897946 4758 generic.go:334] "Generic (PLEG): container finished" podID="d67bb459-81fe-48a2-ac8a-cb4441bb35bb" containerID="95b8c3c6cc21b228c22b9ffe3228bc4810df2f462264c83f968420945773d045" 
exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.898009 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" event={"ID":"d67bb459-81fe-48a2-ac8a-cb4441bb35bb","Type":"ContainerDied","Data":"95b8c3c6cc21b228c22b9ffe3228bc4810df2f462264c83f968420945773d045"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.898724 4758 scope.go:117] "RemoveContainer" containerID="95b8c3c6cc21b228c22b9ffe3228bc4810df2f462264c83f968420945773d045" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.899013 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-d2nmz_openstack-operators(d67bb459-81fe-48a2-ac8a-cb4441bb35bb)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" podUID="d67bb459-81fe-48a2-ac8a-cb4441bb35bb" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.907763 4758 generic.go:334] "Generic (PLEG): container finished" podID="8afd29cc-2dab-460e-ad9d-f17690c15f41" containerID="0874a7ddc92ab0b24afc711eb8d63f639a492a7de869c8a7af586bf54214b376" exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.907859 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" event={"ID":"8afd29cc-2dab-460e-ad9d-f17690c15f41","Type":"ContainerDied","Data":"0874a7ddc92ab0b24afc711eb8d63f639a492a7de869c8a7af586bf54214b376"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.908898 4758 scope.go:117] "RemoveContainer" containerID="0874a7ddc92ab0b24afc711eb8d63f639a492a7de869c8a7af586bf54214b376" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.909366 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-58fc8b87c6-qmw5r_metallb-system(8afd29cc-2dab-460e-ad9d-f17690c15f41)\"" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" podUID="8afd29cc-2dab-460e-ad9d-f17690c15f41" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.911078 4758 scope.go:117] "RemoveContainer" containerID="bc4f970a22c54315f6513899232257efae4e7e4b6f571d8f0a84f9b878900842" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.911389 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(d5a7a812-eaba-4ae7-8d97-e80ae4f70d78)\"" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.913007 4758 generic.go:334] "Generic (PLEG): container finished" podID="c73a71b4-f1fd-4a6c-9832-ce9b48a5f220" containerID="313d83e614b8a8d25ca53b49fd49f5b0805854094c56adf9746feed980253f0f" exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.913123 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" event={"ID":"c73a71b4-f1fd-4a6c-9832-ce9b48a5f220","Type":"ContainerDied","Data":"313d83e614b8a8d25ca53b49fd49f5b0805854094c56adf9746feed980253f0f"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.913627 4758 
scope.go:117] "RemoveContainer" containerID="313d83e614b8a8d25ca53b49fd49f5b0805854094c56adf9746feed980253f0f" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.913944 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-5d8f59fb49-7tzm4_openstack-operators(c73a71b4-f1fd-4a6c-9832-ce9b48a5f220)\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" podUID="c73a71b4-f1fd-4a6c-9832-ce9b48a5f220" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.930714 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" event={"ID":"4612798c-6ae6-4a07-afe6-3f3574ee467b","Type":"ContainerStarted","Data":"6d19c279c49e52b974b4f1a171eb4636e1d31547c5fcfec7d163b5ac11387149"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.930977 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.938304 4758 generic.go:334] "Generic (PLEG): container finished" podID="659f7d3e-5518-4d19-bb54-e39295a667d2" containerID="19cc39117dfffee7f12d4214dc6819efe0f6f773093b16cab00870da7b607074" exitCode=1 Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.938418 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" event={"ID":"659f7d3e-5518-4d19-bb54-e39295a667d2","Type":"ContainerDied","Data":"19cc39117dfffee7f12d4214dc6819efe0f6f773093b16cab00870da7b607074"} Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.938717 4758 scope.go:117] "RemoveContainer" containerID="a13378759202bf3b5e99273b246f563647b62f9c9ba3d166a097fb3b7a5cd4d4" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.938946 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-zfcl5_openstack-operators(7d2439ad-1ca6-4c24-9d15-e04f0f89aedf)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" podUID="7d2439ad-1ca6-4c24-9d15-e04f0f89aedf" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.939040 4758 scope.go:117] "RemoveContainer" containerID="542ed8d1796b1c80fd6e195ec7b32f904339447bd00b8e67d8382cb94f9a53f8" Jan 22 18:00:35 crc kubenswrapper[4758]: I0122 18:00:35.939062 4758 scope.go:117] "RemoveContainer" containerID="19cc39117dfffee7f12d4214dc6819efe0f6f773093b16cab00870da7b607074" Jan 22 18:00:35 crc kubenswrapper[4758]: E0122 18:00:35.939265 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-f2gvw_openshift-marketplace(6daa1231-490e-4ff7-9157-f49cdec96a5e)\"" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" podUID="6daa1231-490e-4ff7-9157-f49cdec96a5e" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.039363 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.040145 4758 scope.go:117] "RemoveContainer" 
containerID="40620850c0b41ba5d105b5476e01b243745145ee1653fe36b73b07bb40385f91" Jan 22 18:00:36 crc kubenswrapper[4758]: E0122 18:00:36.040413 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-85b8fd6746-9vvd6_openstack-operators(71c16ac1-3276-4096-93c5-d10765320713)\"" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" podUID="71c16ac1-3276-4096-93c5-d10765320713" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.055055 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2xj52" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.443983 4758 request.go:700] Waited for 4.974004131s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-z2sxt&resourceVersion=84343 Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.528341 4758 scope.go:117] "RemoveContainer" containerID="ad4303b386c6e21f3904b24f988068646e3106398b796a612dade9432bc95cd7" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.630981 4758 scope.go:117] "RemoveContainer" containerID="a7809f27497752a919b6754cb12a9a6bab28418e529fc85219c6af1b2b6e0687" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.760439 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.761511 4758 scope.go:117] "RemoveContainer" containerID="8d740abd0ed4523b0bbc53fb6cb986e3dd12d30030fb5decbccb8d5c79e3cb4d" Jan 22 18:00:36 crc kubenswrapper[4758]: E0122 18:00:36.761828 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-sb974_openstack-operators(35a3fafd-45ea-465d-90ef-36148a60685e)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" podUID="35a3fafd-45ea-465d-90ef-36148a60685e" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.806574 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.863728 4758 scope.go:117] "RemoveContainer" containerID="f4e1ecc33b122dfeea31b64b121de90bd388c7aeb97dc5736a98282952aea0bb" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.865829 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.957131 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-lpprz" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.962134 4758 scope.go:117] "RemoveContainer" containerID="a13378759202bf3b5e99273b246f563647b62f9c9ba3d166a097fb3b7a5cd4d4" Jan 22 18:00:36 crc kubenswrapper[4758]: E0122 18:00:36.962410 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=manager pod=nova-operator-controller-manager-6b8bc8d87d-zfcl5_openstack-operators(7d2439ad-1ca6-4c24-9d15-e04f0f89aedf)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" podUID="7d2439ad-1ca6-4c24-9d15-e04f0f89aedf" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.964645 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f2gvw_6daa1231-490e-4ff7-9157-f49cdec96a5e/marketplace-operator/1.log" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.971875 4758 scope.go:117] "RemoveContainer" containerID="0874a7ddc92ab0b24afc711eb8d63f639a492a7de869c8a7af586bf54214b376" Jan 22 18:00:36 crc kubenswrapper[4758]: E0122 18:00:36.972105 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-58fc8b87c6-qmw5r_metallb-system(8afd29cc-2dab-460e-ad9d-f17690c15f41)\"" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" podUID="8afd29cc-2dab-460e-ad9d-f17690c15f41" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.972379 4758 scope.go:117] "RemoveContainer" containerID="75150cc4b783423b7047afafc321b44caa1cb3d2820b82c5afc4ef8e57d0e276" Jan 22 18:00:36 crc kubenswrapper[4758]: E0122 18:00:36.972574 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-gd568_openstack-operators(e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" podUID="e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.973060 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" event={"ID":"659f7d3e-5518-4d19-bb54-e39295a667d2","Type":"ContainerStarted","Data":"9328257692ad94f36d64f1b178ddb0d894c66e0d78c6aea357ca87a8b11b2646"} Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.973138 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 18:00:36 crc kubenswrapper[4758]: I0122 18:00:36.973618 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.104349 4758 scope.go:117] "RemoveContainer" containerID="240edd2f680249409c003b4f15a98966b1e1d8f25dbe8d8d91e622618a7b238d" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.371844 4758 scope.go:117] "RemoveContainer" containerID="1d61b57ea732060a674fca3da40faafd12a801a2feede3f87bc0a9c8194f85bb" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.439246 4758 scope.go:117] "RemoveContainer" containerID="fc8ff14bdec8806608a8a75f3794ae87e47866f8eec743c5d6cec4f1daefb700" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.463912 4758 request.go:700] Waited for 4.671396663s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=84252 Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.473076 4758 scope.go:117] 
"RemoveContainer" containerID="09f5beedb93e30a4b68e826f33ffdbcfe408d643e4a6667b28b1a56cfbd08bc2" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.508994 4758 scope.go:117] "RemoveContainer" containerID="94d80fab259bbdba24e6cb6f6b906c1c7fc7544cc57f0cf0de9ee3c67a648b6c" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.547723 4758 scope.go:117] "RemoveContainer" containerID="95d524686bf752428f84ea0aeeb170f883fe48d942e5469121e60914ddd0df88" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.650319 4758 scope.go:117] "RemoveContainer" containerID="c62d76911da0f5713e9e27fb9411fcce83f728d29a3f1dfcd100c7f9a1299640" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.742889 4758 scope.go:117] "RemoveContainer" containerID="98763afcc5b175076c7ccd2ff919e441b44b7eef4344c4bb01c274b2de476b81" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.808468 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:00:37 crc kubenswrapper[4758]: E0122 18:00:37.808884 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.829788 4758 scope.go:117] "RemoveContainer" containerID="5e4cfe8dee549f90ddd7da44b917a696b4ad8b9811a62376b4463b33d409636a" Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.996869 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lpprz" event={"ID":"cc433179-ae5b-4250-80c2-97af371fdfed","Type":"ContainerStarted","Data":"bff7030b2957837cd32c1a701d5da4290501a5bae4bbb52d0d660c8dba7dee95"} Jan 22 18:00:37 crc kubenswrapper[4758]: I0122 18:00:37.997728 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lpprz" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.105137 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.125104 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.145805 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.165220 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.185593 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.205547 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.225492 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.245465 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.270058 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.285178 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.305577 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2w6mb" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.325067 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-q7gzx" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.344923 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.369645 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.385927 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.397275 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.397323 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.398198 4758 scope.go:117] "RemoveContainer" containerID="bc4f970a22c54315f6513899232257efae4e7e4b6f571d8f0a84f9b878900842" Jan 22 18:00:38 crc kubenswrapper[4758]: E0122 18:00:38.398459 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(d5a7a812-eaba-4ae7-8d97-e80ae4f70d78)\"" pod="openstack/kube-state-metrics-0" podUID="d5a7a812-eaba-4ae7-8d97-e80ae4f70d78" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.405303 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.425617 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.444987 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.479932 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-x59mw" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.483410 4758 request.go:700] Waited for 4.561111167s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=84252 Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.485313 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"glance-default-internal-config-data" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.517759 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.525126 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.545557 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lk2r2" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.564922 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.585216 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.605722 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d8jxf" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.626793 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.636908 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" podUID="c4847ca7-5057-4d6d-80c5-f74c7d633e83" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.645410 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.665189 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.685601 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.705813 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.725049 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.751113 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.765192 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.800775 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.805801 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.826140 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.847667 
4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.852121 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.853057 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.865597 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z4pqk" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.885922 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.904674 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.925370 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.945251 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.965628 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 22 18:00:38 crc kubenswrapper[4758]: I0122 18:00:38.985448 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.007152 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.025926 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.048483 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.067005 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.085074 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.115874 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.126425 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.145278 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.166493 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.186234 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.212002 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.228392 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.245553 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.266348 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rdwz2" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.285266 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.305525 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.325399 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.345454 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pdg6h" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.365837 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.387308 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.405693 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-v97lh" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.425648 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.445690 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.469095 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.483779 4758 request.go:700] Waited for 4.891837652s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ceilometer-internal-svc&resourceVersion=84252 Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.486432 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.505190 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.528126 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.547205 4758 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.565486 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.585181 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.605199 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.625691 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.632927 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.633024 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.634541 4758 scope.go:117] "RemoveContainer" containerID="542ed8d1796b1c80fd6e195ec7b32f904339447bd00b8e67d8382cb94f9a53f8" Jan 22 18:00:39 crc kubenswrapper[4758]: E0122 18:00:39.635015 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-f2gvw_openshift-marketplace(6daa1231-490e-4ff7-9157-f49cdec96a5e)\"" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" podUID="6daa1231-490e-4ff7-9157-f49cdec96a5e" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.645897 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.667104 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.690035 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.705486 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.726018 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9zqsl" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.746627 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.765686 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.785472 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r6mc9" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.805132 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.828012 4758 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.846461 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.867081 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.884977 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.906196 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.925842 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.945459 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.965260 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-d798m" Jan 22 18:00:39 crc kubenswrapper[4758]: I0122 18:00:39.985719 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.005541 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.026377 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.046982 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.065900 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.086518 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.104856 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.125811 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.144856 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.165130 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.185643 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.212192 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.227709 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.245062 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.265108 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.285274 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.305222 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.325925 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.345233 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.365353 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.386296 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.405882 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ckpvf" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.426580 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.445849 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9jfxj" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.465324 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.485862 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.503551 4758 request.go:700] Waited for 4.712371871s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-ovncontroller-ovndbs&resourceVersion=84343 Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.504648 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.526064 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.546498 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.565082 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.585661 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.605392 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.625212 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.645764 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.664929 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.686255 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.705557 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.742296 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-n4qvk" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.745344 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.765158 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-x4h8f" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.785909 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.805718 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s6gv4" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.826666 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.845525 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.865540 4758 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.885251 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.906122 4758 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.925542 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.945428 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.964835 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 18:00:40 crc kubenswrapper[4758]: I0122 18:00:40.986551 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tzrkw" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.005964 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.026850 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.046508 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.065897 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.085031 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.105472 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.105999 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.126059 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.145546 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.165083 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.185820 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.206536 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.228383 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.245771 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.265972 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.285114 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.305475 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 
18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.326945 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.345380 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.365002 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.392644 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.405268 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-thg4w" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.426536 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-8d4mj" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.445941 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.466203 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.488765 4758 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.503850 4758 request.go:700] Waited for 4.576420853s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=84512 Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.509989 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.525867 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.545251 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.571427 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.585823 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-bvchw" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.610038 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.625798 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.644918 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.665902 4758 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.685322 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.707504 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.726899 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.745079 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.772233 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.801825 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.810672 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.827772 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.845563 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.870064 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.888523 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.910244 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.925363 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p9vjx" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.944534 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.967681 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 18:00:41 crc kubenswrapper[4758]: I0122 18:00:41.985908 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.085405 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.085910 4758 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.086109 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 
18:00:42.180296 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.201485 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.203203 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.203360 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.203586 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.207107 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.233276 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.233352 4758 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.233546 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.249330 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.276446 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.294676 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.318714 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.340760 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.352651 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.384197 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.399535 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.421251 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.428495 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.447962 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bsxhx" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.474658 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.487071 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.504252 4758 request.go:700] Waited for 2.303507709s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=84384 Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.508798 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.525032 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.556967 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.581770 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.591286 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.635268 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4q6rk" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.661086 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-w2txv" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.661597 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.665996 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.686556 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.755143 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.763597 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2fs5z" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.763903 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.778182 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-s75rc" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.793229 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dbtnp" Jan 22 
18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.813172 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-s6bn2" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.832948 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.874216 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.895787 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.931897 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.944345 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.944714 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.974342 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.980777 4758 trace.go:236] Trace[1826154167]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" (22-Jan-2026 18:00:30.861) (total time: 12119ms): Jan 22 18:00:42 crc kubenswrapper[4758]: Trace[1826154167]: ---"Objects listed" error: 12119ms (18:00:42.980) Jan 22 18:00:42 crc kubenswrapper[4758]: Trace[1826154167]: [12.119268118s] [12.119268118s] END Jan 22 18:00:42 crc kubenswrapper[4758]: I0122 18:00:42.980812 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.025283 4758 trace.go:236] Trace[299899561]: "Reflector ListAndWatch" name:object-"cert-manager"/"cert-manager-webhook-dockercfg-9xxdc" (22-Jan-2026 18:00:31.670) (total time: 11355ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[299899561]: ---"Objects listed" error: 11355ms (18:00:43.025) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[299899561]: [11.35517503s] [11.35517503s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.025311 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9xxdc" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.025385 4758 trace.go:236] Trace[1017841075]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"router-dockercfg-zdk86" (22-Jan-2026 18:00:30.781) (total time: 12243ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1017841075]: ---"Objects listed" error: 12243ms (18:00:43.025) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1017841075]: [12.243991417s] [12.243991417s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.025409 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.025464 4758 trace.go:236] Trace[1994513172]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-operator-index-dockercfg-ck689" 
(22-Jan-2026 18:00:30.980) (total time: 12044ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1994513172]: ---"Objects listed" error: 12044ms (18:00:43.025) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1994513172]: [12.044932351s] [12.044932351s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.025485 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ck689" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.049451 4758 trace.go:236] Trace[1224337150]: "Reflector ListAndWatch" name:object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" (22-Jan-2026 18:00:31.371) (total time: 11678ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1224337150]: ---"Objects listed" error: 11678ms (18:00:43.049) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1224337150]: [11.678343299s] [11.678343299s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.049478 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.087796 4758 trace.go:236] Trace[549424585]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"canary-serving-cert" (22-Jan-2026 18:00:31.008) (total time: 12079ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[549424585]: ---"Objects listed" error: 12078ms (18:00:43.087) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[549424585]: [12.079017781s] [12.079017781s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.087826 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.143774 4758 trace.go:236] Trace[985592252]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:30.929) (total time: 12214ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[985592252]: ---"Objects listed" error: 12214ms (18:00:43.143) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[985592252]: [12.214195475s] [12.214195475s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.143804 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.153423 4758 trace.go:236] Trace[727209375]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"trusted-ca-bundle" (22-Jan-2026 18:00:30.905) (total time: 12248ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[727209375]: ---"Objects listed" error: 12248ms (18:00:43.153) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[727209375]: [12.248131739s] [12.248131739s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.153452 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.190595 4758 trace.go:236] Trace[1440684575]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:30.944) (total time: 12246ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1440684575]: ---"Objects listed" error: 12246ms (18:00:43.190) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1440684575]: [12.246494945s] [12.246494945s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.190626 4758 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.193884 4758 trace.go:236] Trace[1282068126]: "Reflector ListAndWatch" name:object-"openstack"/"cert-horizon-svc" (22-Jan-2026 18:00:30.935) (total time: 12258ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1282068126]: ---"Objects listed" error: 12258ms (18:00:43.193)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1282068126]: [12.258119432s] [12.258119432s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.193905 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.207554 4758 trace.go:236] Trace[1870710425]: "Reflector ListAndWatch" name:object-"openstack"/"memcached-config-data" (22-Jan-2026 18:00:30.895) (total time: 12311ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1870710425]: ---"Objects listed" error: 12311ms (18:00:43.207)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1870710425]: [12.311791715s] [12.311791715s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.207578 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.215380 4758 trace.go:236] Trace[1574674760]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"iptables-alerter-script" (22-Jan-2026 18:00:31.764) (total time: 11450ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1574674760]: ---"Objects listed" error: 11450ms (18:00:43.215)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1574674760]: [11.450641613s] [11.450641613s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.215412 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.235893 4758 trace.go:236] Trace[1501733513]: "Reflector ListAndWatch" name:object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" (22-Jan-2026 18:00:31.904) (total time: 11331ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1501733513]: ---"Objects listed" error: 11331ms (18:00:43.235)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1501733513]: [11.331635019s] [11.331635019s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.235926 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.261595 4758 trace.go:236] Trace[1119216979]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-serving-cert" (22-Jan-2026 18:00:31.904) (total time: 11357ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1119216979]: ---"Objects listed" error: 11357ms (18:00:43.261)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1119216979]: [11.357463943s] [11.357463943s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.261620 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.280491 4758 trace.go:236] Trace[1495866488]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-operator-webhook-server-service-cert" (22-Jan-2026 18:00:31.073) (total time: 12207ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1495866488]: ---"Objects listed" error: 12207ms
(18:00:43.280)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1495866488]: [12.207116081s] [12.207116081s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.280525 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.309725 4758 trace.go:236] Trace[172351008]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" (22-Jan-2026 18:00:30.963) (total time: 12346ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[172351008]: ---"Objects listed" error: 12346ms (18:00:43.309)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[172351008]: [12.346169152s] [12.346169152s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.309785 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.310113 4758 trace.go:236] Trace[322342737]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"serving-cert" (22-Jan-2026 18:00:31.875) (total time: 11434ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[322342737]: ---"Objects listed" error: 11434ms (18:00:43.310)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[322342737]: [11.434648695s] [11.434648695s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.310133 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.369226 4758 trace.go:236] Trace[374777344]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:30.920) (total time: 12448ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[374777344]: ---"Objects listed" error: 12448ms (18:00:43.369)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[374777344]: [12.448636365s] [12.448636365s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.369254 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.408993 4758 trace.go:236] Trace[1805717850]: "Reflector ListAndWatch" name:object-"openstack"/"barbican-keystone-listener-config-data" (22-Jan-2026 18:00:31.250) (total time: 12158ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1805717850]: ---"Objects listed" error: 12158ms (18:00:43.408)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1805717850]: [12.158672661s] [12.158672661s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.409032 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.409320 4758 trace.go:236] Trace[990386485]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" (22-Jan-2026 18:00:30.917) (total time: 12491ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[990386485]: ---"Objects listed" error: 12491ms (18:00:43.409)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[990386485]: [12.491959866s] [12.491959866s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.409330 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 22 18:00:43 crc
kubenswrapper[4758]: I0122 18:00:43.409459 4758 trace.go:236] Trace[1149675477]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" (22-Jan-2026 18:00:30.976) (total time: 12432ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1149675477]: ---"Objects listed" error: 12432ms (18:00:43.409)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1149675477]: [12.432469004s] [12.432469004s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.409467 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.437799 4758 trace.go:236] Trace[2119895782]: "Reflector ListAndWatch" name:object-"openstack"/"horizon" (22-Jan-2026 18:00:31.380) (total time: 12057ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[2119895782]: ---"Objects listed" error: 12057ms (18:00:43.437)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[2119895782]: [12.057303249s] [12.057303249s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.437825 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.438182 4758 trace.go:236] Trace[349278041]: "Reflector ListAndWatch" name:object-"openshift-operators"/"openshift-service-ca.crt" (22-Jan-2026 18:00:30.950) (total time: 12486ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[349278041]: ---"Objects listed" error: 12486ms (18:00:43.437)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[349278041]: [12.486796206s] [12.486796206s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.438327 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.473761 4758 trace.go:236] Trace[2010342551]: "Reflector ListAndWatch" name:object-"openstack"/"cert-rabbitmq-svc" (22-Jan-2026 18:00:31.147) (total time: 12326ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[2010342551]: ---"Objects listed" error: 12326ms (18:00:43.473)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[2010342551]: [12.326409483s] [12.326409483s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.474053 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.473925 4758 trace.go:236] Trace[610183363]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" (22-Jan-2026 18:00:31.550) (total time: 11922ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[610183363]: ---"Objects listed" error: 11922ms (18:00:43.473)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[610183363]: [11.922883804s] [11.922883804s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.474166 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.504616 4758 trace.go:236] Trace[1457554223]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-notifications-default-user" (22-Jan-2026 18:00:31.151) (total time: 12352ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1457554223]: ---"Objects listed" error: 12352ms (18:00:43.504)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1457554223]: [12.352771581s] [12.352771581s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122
18:00:43.504649 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.523414 4758 request.go:700] Waited for 1.786888318s, retries: 1, retry-after: 5s - retry-reason: 503 - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=84680
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.542106 4758 trace.go:236] Trace[1906513492]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:31.558) (total time: 11983ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1906513492]: ---"Objects listed" error: 11983ms (18:00:43.542)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1906513492]: [11.98396257s] [11.98396257s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.542137 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.542400 4758 trace.go:236] Trace[1473206868]: "Reflector ListAndWatch" name:object-"openshift-network-console"/"networking-console-plugin-cert" (22-Jan-2026 18:00:30.674) (total time: 12867ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1473206868]: ---"Objects listed" error: 12867ms (18:00:43.542)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1473206868]: [12.867505032s] [12.867505032s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.542409 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.566733 4758 trace.go:236] Trace[354250565]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"kube-root-ca.crt" (22-Jan-2026 18:00:31.949) (total time: 11617ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[354250565]: ---"Objects listed" error: 11617ms (18:00:43.566)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[354250565]: [11.617636765s] [11.617636765s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.566779 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.570291 4758 trace.go:236] Trace[471152479]: "Reflector ListAndWatch" name:object-"openstack"/"cert-nova-internal-svc" (22-Jan-2026 18:00:31.150) (total time: 12419ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[471152479]: ---"Objects listed" error: 12419ms (18:00:43.570)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[471152479]: [12.419677296s] [12.419677296s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.570315 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.585367 4758 trace.go:236] Trace[72814283]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-startup" (22-Jan-2026 18:00:30.683) (total time: 12902ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[72814283]: ---"Objects listed" error: 12902ms (18:00:43.585)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[72814283]: [12.902221869s] [12.902221869s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.585394 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 22
18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.605389 4758 trace.go:236] Trace[1707854242]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"service-ca-operator-config" (22-Jan-2026 18:00:30.780) (total time: 12825ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1707854242]: ---"Objects listed" error: 12825ms (18:00:43.605)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1707854242]: [12.825094367s] [12.825094367s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.605419 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.629402 4758 trace.go:236] Trace[1766613747]: "Reflector ListAndWatch" name:object-"openshift-operators"/"perses-operator-dockercfg-c658k" (22-Jan-2026 18:00:30.777) (total time: 12852ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1766613747]: ---"Objects listed" error: 12852ms (18:00:43.629)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1766613747]: [12.852338849s] [12.852338849s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.629435 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-c658k"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.653577 4758 trace.go:236] Trace[669109245]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" (22-Jan-2026 18:00:30.686) (total time: 12967ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[669109245]: ---"Objects listed" error: 12967ms (18:00:43.653)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[669109245]: [12.967431927s] [12.967431927s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.653607 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.686278 4758 trace.go:236] Trace[431518505]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"mco-proxy-tls" (22-Jan-2026 18:00:31.508) (total time: 12177ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[431518505]: ---"Objects listed" error: 12177ms (18:00:43.686)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[431518505]: [12.177260488s] [12.177260488s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.686314 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.711258 4758 trace.go:236] Trace[1824552912]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-operator-controller-manager-service-cert" (22-Jan-2026 18:00:30.743) (total time: 12967ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1824552912]: ---"Objects listed" error: 12967ms (18:00:43.711)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1824552912]: [12.967865398s] [12.967865398s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.711284 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.732967 4758 trace.go:236] Trace[464972786]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:31.546) (total time: 12186ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[464972786]: ---"Objects listed" error: 12186ms
(18:00:43.732)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[464972786]: [12.186893471s] [12.186893471s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.732994 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.746366 4758 trace.go:236] Trace[1613756109]: "Reflector ListAndWatch" name:object-"openshift-service-ca"/"kube-root-ca.crt" (22-Jan-2026 18:00:30.838) (total time: 12907ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1613756109]: ---"Objects listed" error: 12907ms (18:00:43.746)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1613756109]: [12.907435452s] [12.907435452s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.746392 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.766388 4758 trace.go:236] Trace[1053993759]: "Reflector ListAndWatch" name:object-"openstack"/"cert-nova-public-svc" (22-Jan-2026 18:00:31.397) (total time: 12368ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1053993759]: ---"Objects listed" error: 12368ms (18:00:43.766)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1053993759]: [12.368631035s] [12.368631035s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.766423 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.797806 4758 trace.go:236] Trace[1200699458]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"openshift-global-ca" (22-Jan-2026 18:00:31.266) (total time: 12531ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1200699458]: ---"Objects listed" error: 12531ms (18:00:43.797)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1200699458]: [12.531290338s] [12.531290338s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.797826 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.811311 4758 trace.go:236] Trace[1211219230]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-volume-nfs-config-data" (22-Jan-2026 18:00:31.296) (total time: 12514ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1211219230]: ---"Objects listed" error: 12514ms (18:00:43.811)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1211219230]: [12.514801278s] [12.514801278s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.811618 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.854010 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b7565899b-vlqs7"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.864832 4758 trace.go:236] Trace[403712677]: "Reflector ListAndWatch" name:object-"openstack"/"ceilometer-config-data" (22-Jan-2026 18:00:31.305) (total time: 12559ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[403712677]: ---"Objects listed" error: 12559ms (18:00:43.864)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[403712677]: [12.559106977s] [12.559106977s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.864862 4758 reflector.go:368] Caches populated for *v1.Secret from
object-"openstack"/"ceilometer-config-data" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.865215 4758 trace.go:236] Trace[577748832]: "Reflector ListAndWatch" name:object-"openstack"/"cert-swift-public-svc" (22-Jan-2026 18:00:31.249) (total time: 12615ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[577748832]: ---"Objects listed" error: 12615ms (18:00:43.865) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[577748832]: [12.615991027s] [12.615991027s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.865243 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.896950 4758 trace.go:236] Trace[1635855464]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" (22-Jan-2026 18:00:31.315) (total time: 12581ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1635855464]: ---"Objects listed" error: 12581ms (18:00:43.896) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1635855464]: [12.58126346s] [12.58126346s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.897288 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.899068 4758 trace.go:236] Trace[1156093195]: "Reflector ListAndWatch" name:object-"openstack"/"cert-barbican-internal-svc" (22-Jan-2026 18:00:31.108) (total time: 12790ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1156093195]: ---"Objects listed" error: 12790ms (18:00:43.899) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1156093195]: [12.790790621s] [12.790790621s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.899203 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.921441 4758 trace.go:236] Trace[1039703320]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:31.112) (total time: 12808ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1039703320]: ---"Objects listed" error: 12808ms (18:00:43.921) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1039703320]: [12.808962067s] [12.808962067s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.921728 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.960756 4758 trace.go:236] Trace[517897775]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-scripts" (22-Jan-2026 18:00:31.128) (total time: 12832ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[517897775]: ---"Objects listed" error: 12832ms (18:00:43.960) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[517897775]: [12.832362925s] [12.832362925s] END Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.960781 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.960988 4758 trace.go:236] Trace[1412977606]: "Reflector ListAndWatch" name:object-"openstack"/"keystone-config-data" (22-Jan-2026 18:00:31.381) (total time: 12579ms): Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1412977606]: ---"Objects listed" error: 12579ms (18:00:43.960) Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1412977606]: [12.579314897s] 
[12.579314897s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.960998 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.990815 4758 trace.go:236] Trace[1553275075]: "Reflector ListAndWatch" name:object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zfvmv" (22-Jan-2026 18:00:31.368) (total time: 12622ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1553275075]: ---"Objects listed" error: 12622ms (18:00:43.990)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[1553275075]: [12.622551377s] [12.622551377s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.990845 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zfvmv"
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.991878 4758 trace.go:236] Trace[814425973]: "Reflector ListAndWatch" name:object-"openstack"/"kube-root-ca.crt" (22-Jan-2026 18:00:31.392) (total time: 12599ms):
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[814425973]: ---"Objects listed" error: 12599ms (18:00:43.991)
Jan 22 18:00:43 crc kubenswrapper[4758]: Trace[814425973]: [12.599375554s] [12.599375554s] END
Jan 22 18:00:43 crc kubenswrapper[4758]: I0122 18:00:43.991903 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.006108 4758 trace.go:236] Trace[1419268963]: "Reflector ListAndWatch" name:object-"openshift-controller-manager-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:31.278) (total time: 12727ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1419268963]: ---"Objects listed" error: 12727ms (18:00:44.006)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1419268963]: [12.727157826s] [12.727157826s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.006140 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.028208 4758 trace.go:236] Trace[2051395524]: "Reflector ListAndWatch" name:object-"openstack"/"neutron-config" (22-Jan-2026 18:00:31.959) (total time: 12068ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2051395524]: ---"Objects listed" error: 12068ms (18:00:44.028)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2051395524]: [12.068571435s] [12.068571435s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.028239 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.045089 4758 trace.go:236] Trace[1754318560]: "Reflector ListAndWatch" name:object-"openstack"/"placement-config-data" (22-Jan-2026 18:00:31.342) (total time: 12702ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1754318560]: ---"Objects listed" error: 12702ms (18:00:44.045)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1754318560]: [12.702162566s] [12.702162566s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.045122 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.077171 4758 trace.go:236] Trace[593094995]: "Reflector ListAndWatch" name:object-"openstack"/"telemetry-ceilometer-dockercfg-kvpw9" (22-Jan-2026 18:00:31.366) (total time: 12710ms):
Jan 22 18:00:44 crc kubenswrapper[4758]:
Trace[593094995]: ---"Objects listed" error: 12710ms (18:00:44.077)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[593094995]: [12.710835283s] [12.710835283s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.077403 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-kvpw9"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.087928 4758 trace.go:236] Trace[2093501072]: "Reflector ListAndWatch" name:object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gmg82" (22-Jan-2026 18:00:31.473) (total time: 12614ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2093501072]: ---"Objects listed" error: 12614ms (18:00:44.087)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2093501072]: [12.614218089s] [12.614218089s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.087953 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gmg82"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.109821 4758 trace.go:236] Trace[543376984]: "Reflector ListAndWatch" name:object-"hostpath-provisioner"/"kube-root-ca.crt" (22-Jan-2026 18:00:31.260) (total time: 12849ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[543376984]: ---"Objects listed" error: 12849ms (18:00:44.109)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[543376984]: [12.849669326s] [12.849669326s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.109850 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.127545 4758 trace.go:236] Trace[1689326083]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"kube-root-ca.crt" (22-Jan-2026 18:00:31.401) (total time: 12725ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1689326083]: ---"Objects listed" error: 12725ms (18:00:44.127)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1689326083]: [12.72577041s] [12.72577041s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.127572 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.148191 4758 trace.go:236] Trace[1780210698]: "Reflector ListAndWatch" name:object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-z2sxt" (22-Jan-2026 18:00:31.469) (total time: 12678ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1780210698]: ---"Objects listed" error: 12678ms (18:00:44.148)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1780210698]: [12.6784536s] [12.6784536s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.148455 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-z2sxt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.166706 4758 trace.go:236] Trace[1640244314]: "Reflector ListAndWatch" name:object-"openstack"/"horizon-horizon-dockercfg-n2vxv" (22-Jan-2026 18:00:31.204) (total time: 12962ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1640244314]: ---"Objects listed" error: 12962ms (18:00:44.166)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1640244314]: [12.962492222s] [12.962492222s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.166752 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-n2vxv"
Jan 22 18:00:44 crc
kubenswrapper[4758]: I0122 18:00:44.184540 4758 trace.go:236] Trace[599469525]: "Reflector ListAndWatch" name:object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" (22-Jan-2026 18:00:31.918) (total time: 12266ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[599469525]: ---"Objects listed" error: 12266ms (18:00:44.184)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[599469525]: [12.266198472s] [12.266198472s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.184564 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.205902 4758 trace.go:236] Trace[151650313]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-excludel2" (22-Jan-2026 18:00:31.398) (total time: 12807ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[151650313]: ---"Objects listed" error: 12807ms (18:00:44.205)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[151650313]: [12.807027174s] [12.807027174s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.205937 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.225064 4758 trace.go:236] Trace[1705941466]: "Reflector ListAndWatch" name:object-"openstack"/"cert-cinder-internal-svc" (22-Jan-2026 18:00:31.180) (total time: 13044ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1705941466]: ---"Objects listed" error: 13044ms (18:00:44.224)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1705941466]: [13.044220841s] [13.044220841s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.225096 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.247808 4758 trace.go:236] Trace[1535575872]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"kube-root-ca.crt" (22-Jan-2026 18:00:31.749) (total time: 12498ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1535575872]: ---"Objects listed" error: 12498ms (18:00:44.247)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1535575872]: [12.498544886s] [12.498544886s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.247836 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.265671 4758 trace.go:236] Trace[1988601205]: "Reflector ListAndWatch" name:object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zpd54" (22-Jan-2026 18:00:31.163) (total time: 13102ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1988601205]: ---"Objects listed" error: 13102ms (18:00:44.265)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1988601205]: [13.102589191s] [13.102589191s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.265697 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zpd54"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.286627 4758 trace.go:236] Trace[696275613]: "Reflector ListAndWatch" name:object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:31.407) (total time: 12879ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[696275613]: ---"Objects listed" error: 12879ms (18:00:44.286)
Jan 22 18:00:44 crc
kubenswrapper[4758]: Trace[696275613]: [12.879093429s] [12.879093429s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.286660 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.307271 4758 trace.go:236] Trace[521606248]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" (22-Jan-2026 18:00:31.466) (total time: 12840ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[521606248]: ---"Objects listed" error: 12840ms (18:00:44.307)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[521606248]: [12.840552358s] [12.840552358s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.307295 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.326050 4758 trace.go:236] Trace[2025839798]: "Reflector ListAndWatch" name:object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8x67n" (22-Jan-2026 18:00:31.434) (total time: 12891ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2025839798]: ---"Objects listed" error: 12891ms (18:00:44.325)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2025839798]: [12.891034004s] [12.891034004s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.326080 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8x67n"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.350630 4758 trace.go:236] Trace[997153523]: "Reflector ListAndWatch" name:object-"openshift-network-diagnostics"/"openshift-service-ca.crt" (22-Jan-2026 18:00:31.460) (total time: 12890ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[997153523]: ---"Objects listed" error: 12890ms (18:00:44.350)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[997153523]: [12.89013022s] [12.89013022s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.350660 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.366030 4758 trace.go:236] Trace[328994481]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"openshift-service-ca.crt" (22-Jan-2026 18:00:31.405) (total time: 12960ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[328994481]: ---"Objects listed" error: 12960ms (18:00:44.365)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[328994481]: [12.960882497s] [12.960882497s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.366057 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.391020 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.406906 4758 trace.go:236] Trace[1798534441]: "Reflector ListAndWatch" name:object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" (22-Jan-2026 18:00:31.968) (total time: 12438ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1798534441]: ---"Objects listed" error: 12438ms (18:00:44.406)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1798534441]: [12.438705615s] [12.438705615s]
END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.406932 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.425309 4758 trace.go:236] Trace[253639399]: "Reflector ListAndWatch" name:object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" (22-Jan-2026 18:00:31.960) (total time: 12464ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[253639399]: ---"Objects listed" error: 12464ms (18:00:44.425)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[253639399]: [12.46455027s] [12.46455027s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.425340 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.451065 4758 trace.go:236] Trace[1653985352]: "Reflector ListAndWatch" name:object-"openshift-network-node-identity"/"kube-root-ca.crt" (22-Jan-2026 18:00:32.003) (total time: 12447ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1653985352]: ---"Objects listed" error: 12447ms (18:00:44.451)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1653985352]: [12.447237507s] [12.447237507s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.451092 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.453807 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-s8q8p"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.465017 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.485943 4758 trace.go:236] Trace[1488896198]: "Reflector ListAndWatch" name:object-"openshift-operators"/"observability-operator-tls" (22-Jan-2026 18:00:32.039) (total time: 12446ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1488896198]: ---"Objects listed" error: 12446ms (18:00:44.485)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1488896198]: [12.446089496s] [12.446089496s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.485970 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.505652 4758 trace.go:236] Trace[762492818]: "Reflector ListAndWatch" name:object-"openshift-dns"/"dns-default-metrics-tls" (22-Jan-2026 18:00:32.066) (total time: 12439ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[762492818]: ---"Objects listed" error: 12439ms (18:00:44.505)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[762492818]: [12.439069665s] [12.439069665s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.505676 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.523919 4758 request.go:700] Waited for 2.659880572s, retries: 1, retry-after: 5s - retry-reason: 503 - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=84143
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.525469
4758 trace.go:236] Trace[1883770868]: "Reflector ListAndWatch" name:object-"openshift-multus"/"multus-admission-controller-secret" (22-Jan-2026 18:00:32.084) (total time: 12440ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1883770868]: ---"Objects listed" error: 12440ms (18:00:44.525)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1883770868]: [12.440777721s] [12.440777721s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.525488 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.541430 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2mr2s"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.544574 4758 trace.go:236] Trace[1191698661]: "Reflector ListAndWatch" name:object-"openstack"/"cert-placement-internal-svc" (22-Jan-2026 18:00:32.105) (total time: 12438ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1191698661]: ---"Objects listed" error: 12438ms (18:00:44.544)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1191698661]: [12.43888027s] [12.43888027s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.544600 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.567077 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.584549 4758 trace.go:236] Trace[358988372]: "Reflector ListAndWatch" name:object-"openstack"/"ovndbcluster-nb-config" (22-Jan-2026 18:00:32.111) (total time: 12472ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[358988372]: ---"Objects listed" error: 12472ms (18:00:44.584)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[358988372]: [12.472669821s] [12.472669821s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.584577 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.606557 4758 trace.go:236] Trace[751264792]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"marketplace-operator-metrics" (22-Jan-2026 18:00:32.127) (total time: 12479ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[751264792]: ---"Objects listed" error: 12479ms (18:00:44.606)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[751264792]: [12.479179028s] [12.479179028s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.606583 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.613058 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-tlt96"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.626505 4758 trace.go:236] Trace[780798037]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"serving-cert" (22-Jan-2026 18:00:32.193) (total time: 12432ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[780798037]: ---"Objects listed" error: 12432ms (18:00:44.626)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[780798037]: [12.432485706s] [12.432485706s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.626531 4758 reflector.go:368] Caches
populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.646135 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.664703 4758 trace.go:236] Trace[1509059233]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"nginx-conf" (22-Jan-2026 18:00:32.261) (total time: 12403ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1509059233]: ---"Objects listed" error: 12403ms (18:00:44.664)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1509059233]: [12.403494315s] [12.403494315s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.664733 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.686684 4758 trace.go:236] Trace[1858029997]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" (22-Jan-2026 18:00:32.274) (total time: 12412ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1858029997]: ---"Objects listed" error: 12412ms (18:00:44.686)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1858029997]: [12.412353387s] [12.412353387s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.686708 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.703379 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zkfzz"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.707791 4758 trace.go:236] Trace[873096606]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-server-conf" (22-Jan-2026 18:00:32.206) (total time: 12500ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[873096606]: ---"Objects listed" error: 12500ms (18:00:44.707)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[873096606]: [12.500897071s] [12.500897071s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.707820 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.726274 4758 trace.go:236] Trace[1830644373]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8t2s8" (22-Jan-2026 18:00:32.275) (total time: 12450ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1830644373]: ---"Objects listed" error: 12450ms (18:00:44.726)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1830644373]: [12.450719103s] [12.450719103s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.726304 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8t2s8"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.759783 4758 trace.go:236] Trace[1627651857]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"trusted-ca" (22-Jan-2026 18:00:32.278) (total time: 12480ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1627651857]: ---"Objects listed" error: 12480ms (18:00:44.759)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1627651857]: [12.480880375s] [12.480880375s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.760116 4758 reflector.go:368] Caches
populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.789526 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.812168 4758 trace.go:236] Trace[1047951330]: "Reflector ListAndWatch" name:object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pxl5h" (22-Jan-2026 18:00:32.379) (total time: 12432ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1047951330]: ---"Objects listed" error: 12432ms (18:00:44.812)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1047951330]: [12.43265918s] [12.43265918s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.812194 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pxl5h"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.832383 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.845269 4758 trace.go:236] Trace[977101592]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" (22-Jan-2026 18:00:32.289) (total time: 12555ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[977101592]: ---"Objects listed" error: 12555ms (18:00:44.845)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[977101592]: [12.555807187s] [12.555807187s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.845299 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.858052 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.884666 4758 trace.go:236] Trace[1176144442]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"image-registry-certificates" (22-Jan-2026 18:00:32.398) (total time: 12486ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1176144442]: ---"Objects listed" error: 12486ms (18:00:44.884)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1176144442]: [12.486073026s] [12.486073026s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.884693 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.886211 4758 trace.go:236] Trace[2054577478]: "Reflector ListAndWatch" name:object-"openshift-network-node-identity"/"env-overrides" (22-Jan-2026 18:00:32.412) (total time: 12473ms):
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2054577478]: ---"Objects listed" error: 12473ms (18:00:44.886)
Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2054577478]: [12.473450132s] [12.473450132s] END
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.886230 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.915461 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.928029 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy"
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.928783 4758 scope.go:117] "RemoveContainer" containerID="e613c4f2ad9c6863c7df30149d4cb496ab5143ff68022815bf706b1598c0c8f7" Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.932229 4758 trace.go:236] Trace[650578711]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"openshift-service-ca.crt" (22-Jan-2026 18:00:32.429) (total time: 12503ms): Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[650578711]: ---"Objects listed" error: 12503ms (18:00:44.932) Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[650578711]: [12.503165022s] [12.503165022s] END Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.932259 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.947565 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2fkhp" Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.954506 4758 trace.go:236] Trace[2062733460]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" (22-Jan-2026 18:00:32.442) (total time: 12511ms): Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2062733460]: ---"Objects listed" error: 12511ms (18:00:44.954) Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[2062733460]: [12.511805427s] [12.511805427s] END Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.954531 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.977164 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dfb5n" Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.977211 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.978077 4758 scope.go:117] "RemoveContainer" containerID="75150cc4b783423b7047afafc321b44caa1cb3d2820b82c5afc4ef8e57d0e276" Jan 22 18:00:44 crc kubenswrapper[4758]: E0122 18:00:44.978374 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-gd568_openstack-operators(e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" podUID="e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7" Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.999676 4758 trace.go:236] Trace[1223530787]: "Reflector ListAndWatch" name:object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nzrzh" (22-Jan-2026 18:00:32.470) (total time: 12529ms): Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1223530787]: ---"Objects listed" error: 12529ms (18:00:44.999) Jan 22 18:00:44 crc kubenswrapper[4758]: Trace[1223530787]: [12.52916224s] [12.52916224s] END Jan 22 18:00:44 crc kubenswrapper[4758]: I0122 18:00:44.999709 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nzrzh" Jan 22 
18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.008093 4758 trace.go:236] Trace[628173947]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-ca-bundle" (22-Jan-2026 18:00:32.506) (total time: 12501ms):
Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[628173947]: ---"Objects listed" error: 12501ms (18:00:45.008)
Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[628173947]: [12.501062604s] [12.501062604s] END
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.008125 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.022523 4758 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.027772 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.045913 4758 trace.go:236] Trace[1201158753]: "Reflector ListAndWatch" name:object-"cert-manager"/"kube-root-ca.crt" (22-Jan-2026 18:00:32.538) (total time: 12507ms):
Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1201158753]: ---"Objects listed" error: 12507ms (18:00:45.045)
Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1201158753]: [12.507440508s] [12.507440508s] END
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.045943 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.068449 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.087558 4758 trace.go:236] Trace[43436079]: "Reflector ListAndWatch" name:object-"openstack"/"openstackclient-openstackclient-dockercfg-kmlnc" (22-Jan-2026 18:00:32.551) (total time: 12536ms):
Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[43436079]: ---"Objects listed" error: 12536ms (18:00:45.087)
Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[43436079]: [12.536268104s] [12.536268104s] END
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.087582 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-kmlnc"
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.095604 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-2qp8f"
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.106423 4758 trace.go:236] Trace[607556952]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-erlang-cookie" (22-Jan-2026 18:00:32.554) (total time: 12551ms):
Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[607556952]: ---"Objects listed" error: 12551ms (18:00:45.106)
Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[607556952]: [12.551508039s] [12.551508039s] END
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.106447 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.124574 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz"
Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.125394 4758 scope.go:117] "RemoveContainer"
containerID="95b8c3c6cc21b228c22b9ffe3228bc4810df2f462264c83f968420945773d045" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.129261 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.151486 4758 trace.go:236] Trace[1042979550]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"kube-root-ca.crt" (22-Jan-2026 18:00:32.561) (total time: 12590ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1042979550]: ---"Objects listed" error: 12590ms (18:00:45.151) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1042979550]: [12.590356308s] [12.590356308s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.151511 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.168426 4758 trace.go:236] Trace[463666736]: "Reflector ListAndWatch" name:object-"openstack"/"cert-barbican-public-svc" (22-Jan-2026 18:00:32.571) (total time: 12596ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[463666736]: ---"Objects listed" error: 12596ms (18:00:45.168) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[463666736]: [12.596500376s] [12.596500376s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.168800 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.189094 4758 trace.go:236] Trace[230008532]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-server-dockercfg-5sdkn" (22-Jan-2026 18:00:32.587) (total time: 12601ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[230008532]: ---"Objects listed" error: 12601ms (18:00:45.189) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[230008532]: [12.60142637s] [12.60142637s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.189555 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5sdkn" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.207542 4758 trace.go:236] Trace[773131546]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"registry-dockercfg-kzzsd" (22-Jan-2026 18:00:32.614) (total time: 12593ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[773131546]: ---"Objects listed" error: 12593ms (18:00:45.207) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[773131546]: [12.593148244s] [12.593148244s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.207575 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.226601 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.245473 4758 trace.go:236] Trace[52900934]: "Reflector ListAndWatch" name:object-"openshift-cluster-version"/"openshift-service-ca.crt" (22-Jan-2026 18:00:32.640) (total time: 12604ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[52900934]: ---"Objects listed" error: 12604ms (18:00:45.245) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[52900934]: [12.604873515s] [12.604873515s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.245500 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 
18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.272163 4758 trace.go:236] Trace[1363367182]: "Reflector ListAndWatch" name:object-"openstack"/"cert-keystone-internal-svc" (22-Jan-2026 18:00:32.651) (total time: 12620ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1363367182]: ---"Objects listed" error: 12620ms (18:00:45.272) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1363367182]: [12.620929752s] [12.620929752s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.272484 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.289989 4758 trace.go:236] Trace[688606950]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-cinder-dockercfg-85hcg" (22-Jan-2026 18:00:32.667) (total time: 12622ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[688606950]: ---"Objects listed" error: 12622ms (18:00:45.289) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[688606950]: [12.622871314s] [12.622871314s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.290017 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-85hcg" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.305295 4758 trace.go:236] Trace[1077743143]: "Reflector ListAndWatch" name:object-"openstack"/"cert-metric-storage-prometheus-svc" (22-Jan-2026 18:00:32.677) (total time: 12627ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1077743143]: ---"Objects listed" error: 12627ms (18:00:45.305) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1077743143]: [12.627301835s] [12.627301835s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.305330 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.324936 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4jql8" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.349328 4758 trace.go:236] Trace[309023379]: "Reflector ListAndWatch" name:object-"openstack"/"watcher-api-config-data" (22-Jan-2026 18:00:32.761) (total time: 12588ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[309023379]: ---"Objects listed" error: 12588ms (18:00:45.349) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[309023379]: [12.58820203s] [12.58820203s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.349357 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.369072 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.396449 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.398527 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.399459 4758 scope.go:117] "RemoveContainer" containerID="6057c010d5a2e16b55b128c8b625c607ee210bd2a7542ae56469c8480cda9a9e" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.400104 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.400485 4758 scope.go:117] "RemoveContainer" containerID="ec249bf443459d83099a5ffad149437d9827fa235843daa76dae2c305f96d608" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.409383 4758 trace.go:236] Trace[1376366992]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"service-ca-bundle" (22-Jan-2026 18:00:32.790) (total time: 12619ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1376366992]: ---"Objects listed" error: 12619ms (18:00:45.409) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1376366992]: [12.619158694s] [12.619158694s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.409415 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.434570 4758 trace.go:236] Trace[1580675396]: "Reflector ListAndWatch" name:object-"openstack-operators"/"infra-operator-webhook-server-cert" (22-Jan-2026 18:00:32.792) (total time: 12642ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1580675396]: ---"Objects listed" error: 12642ms (18:00:45.434) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1580675396]: [12.642158051s] [12.642158051s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.434596 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.457623 4758 trace.go:236] Trace[2024164069]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" (22-Jan-2026 18:00:32.807) (total time: 12650ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[2024164069]: ---"Objects listed" error: 12650ms (18:00:45.457) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[2024164069]: [12.65020973s] [12.65020973s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.457650 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.475947 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.493373 4758 trace.go:236] Trace[1068053035]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-router-certs" (22-Jan-2026 18:00:32.809) (total time: 12683ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1068053035]: ---"Objects listed" error: 12683ms (18:00:45.493) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1068053035]: [12.68396303s] [12.68396303s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.493406 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.494295 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.495114 4758 scope.go:117] "RemoveContainer" containerID="313d83e614b8a8d25ca53b49fd49f5b0805854094c56adf9746feed980253f0f" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.510838 4758 trace.go:236] Trace[176448581]: "Reflector ListAndWatch" 
name:object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" (22-Jan-2026 18:00:32.809) (total time: 12701ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[176448581]: ---"Objects listed" error: 12701ms (18:00:45.510) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[176448581]: [12.701330283s] [12.701330283s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.510858 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.525487 4758 trace.go:236] Trace[1413545201]: "Reflector ListAndWatch" name:object-"openstack"/"keystone" (22-Jan-2026 18:00:32.841) (total time: 12683ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1413545201]: ---"Objects listed" error: 12683ms (18:00:45.525) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1413545201]: [12.683736784s] [12.683736784s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.525516 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.529954 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.530853 4758 scope.go:117] "RemoveContainer" containerID="48fc7905bf24391116479c62be583909766b4c209c1c234abbd54bc7146a4de2" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.549933 4758 request.go:700] Waited for 2.984783289s, retries: 1, retry-after: 5s - retry-reason: 503 - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-2w6nn&resourceVersion=84384 Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.550957 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.551698 4758 scope.go:117] "RemoveContainer" containerID="a13378759202bf3b5e99273b246f563647b62f9c9ba3d166a097fb3b7a5cd4d4" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.557093 4758 trace.go:236] Trace[1042415057]: "Reflector ListAndWatch" name:object-"openstack"/"memcached-memcached-dockercfg-2w6nn" (22-Jan-2026 18:00:32.846) (total time: 12710ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1042415057]: ---"Objects listed" error: 12710ms (18:00:45.557) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1042415057]: [12.710502163s] [12.710502163s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.557123 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2w6nn" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.564817 4758 trace.go:236] Trace[655542069]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"openshift-service-ca.crt" (22-Jan-2026 18:00:32.852) (total time: 12711ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[655542069]: ---"Objects listed" error: 12711ms (18:00:45.564) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[655542069]: [12.711881661s] [12.711881661s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.564838 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 18:00:45 crc 
kubenswrapper[4758]: I0122 18:00:45.593713 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.617735 4758 trace.go:236] Trace[70153135]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-session" (22-Jan-2026 18:00:32.855) (total time: 12762ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[70153135]: ---"Objects listed" error: 12762ms (18:00:45.617) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[70153135]: [12.762583633s] [12.762583633s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.617811 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.635464 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.659837 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.685282 4758 trace.go:236] Trace[419728689]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"serving-cert" (22-Jan-2026 18:00:32.885) (total time: 12799ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[419728689]: ---"Objects listed" error: 12799ms (18:00:45.685) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[419728689]: [12.799990733s] [12.799990733s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.685547 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.704200 4758 trace.go:236] Trace[251671155]: "Reflector ListAndWatch" name:object-"openstack"/"cert-memcached-svc" (22-Jan-2026 18:00:32.887) (total time: 12816ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[251671155]: ---"Objects listed" error: 12816ms (18:00:45.704) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[251671155]: [12.816793581s] [12.816793581s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.704527 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.731012 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.731808 4758 scope.go:117] "RemoveContainer" containerID="bff07a437f5fc924349f5e1eb2cd2cd67ab6a607451a68300f3a9b80f24fafb4" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.732054 4758 trace.go:236] Trace[1598150237]: "Reflector ListAndWatch" name:object-"openstack"/"test-operator-controller-priv-key" (22-Jan-2026 18:00:32.900) (total time: 12831ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1598150237]: ---"Objects listed" error: 12831ms (18:00:45.731) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1598150237]: [12.831861831s] [12.831861831s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.732260 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.732811 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.749049 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.758077 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59n7w" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.791594 4758 trace.go:236] Trace[547449454]: "Reflector ListAndWatch" name:object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f7gls" (22-Jan-2026 18:00:32.953) (total time: 12837ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[547449454]: ---"Objects listed" error: 12837ms (18:00:45.791) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[547449454]: [12.837723552s] [12.837723552s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.791622 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f7gls" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.803208 4758 trace.go:236] Trace[85182392]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" (22-Jan-2026 18:00:32.963) (total time: 12839ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[85182392]: ---"Objects listed" error: 12839ms (18:00:45.803) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[85182392]: [12.839998444s] [12.839998444s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.803239 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.822007 4758 trace.go:236] Trace[858185786]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" (22-Jan-2026 18:00:32.963) (total time: 12858ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[858185786]: ---"Objects listed" error: 12858ms (18:00:45.821) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[858185786]: [12.858739664s] [12.858739664s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.822036 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.828319 4758 trace.go:236] Trace[585291413]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pz96z" (22-Jan-2026 18:00:32.963) (total time: 12864ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[585291413]: ---"Objects listed" error: 12864ms (18:00:45.828) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[585291413]: [12.864995085s] [12.864995085s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.828342 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pz96z" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.863901 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np2j4" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.881384 4758 trace.go:236] Trace[281045751]: "Reflector ListAndWatch" 
name:object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" (22-Jan-2026 18:00:33.063) (total time: 12817ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[281045751]: ---"Objects listed" error: 12817ms (18:00:45.881) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[281045751]: [12.817685025s] [12.817685025s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.881699 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.882036 4758 trace.go:236] Trace[960738349]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"etcd-serving-ca" (22-Jan-2026 18:00:33.055) (total time: 12826ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[960738349]: ---"Objects listed" error: 12826ms (18:00:45.882) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[960738349]: [12.826770273s] [12.826770273s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.882046 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.890236 4758 trace.go:236] Trace[1226644623]: "Reflector ListAndWatch" name:object-"openstack"/"galera-openstack-dockercfg-g2jsf" (22-Jan-2026 18:00:33.044) (total time: 12845ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1226644623]: ---"Objects listed" error: 12845ms (18:00:45.890) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1226644623]: [12.845992737s] [12.845992737s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.890266 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-g2jsf" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.917568 4758 trace.go:236] Trace[1250081991]: "Reflector ListAndWatch" name:object-"openstack"/"default-dockercfg-d4w66" (22-Jan-2026 18:00:33.075) (total time: 12841ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1250081991]: ---"Objects listed" error: 12841ms (18:00:45.917) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1250081991]: [12.841419632s] [12.841419632s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.917703 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-d4w66" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.948117 4758 trace.go:236] Trace[1286215597]: "Reflector ListAndWatch" name:object-"openshift-network-diagnostics"/"kube-root-ca.crt" (22-Jan-2026 18:00:33.087) (total time: 12860ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1286215597]: ---"Objects listed" error: 12860ms (18:00:45.948) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1286215597]: [12.860181732s] [12.860181732s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.948148 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.954399 4758 trace.go:236] Trace[1944113159]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"encryption-config-1" (22-Jan-2026 18:00:33.093) (total time: 12860ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1944113159]: ---"Objects listed" error: 12860ms (18:00:45.954) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[1944113159]: [12.860990286s] [12.860990286s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.954429 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.966536 4758 trace.go:236] Trace[390846240]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-config-data" (22-Jan-2026 18:00:33.099) (total time: 12866ms): Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[390846240]: ---"Objects listed" error: 12866ms (18:00:45.966) Jan 22 18:00:45 crc kubenswrapper[4758]: Trace[390846240]: [12.866890296s] [12.866890296s] END Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.966566 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 22 18:00:45 crc kubenswrapper[4758]: I0122 18:00:45.993183 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.019390 4758 trace.go:236] Trace[1214770757]: "Reflector ListAndWatch" name:object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qcqlv" (22-Jan-2026 18:00:33.100) (total time: 12918ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[1214770757]: ---"Objects listed" error: 12918ms (18:00:46.019) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[1214770757]: [12.918526993s] [12.918526993s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.019421 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qcqlv" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.029364 4758 trace.go:236] Trace[477628240]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"nmstate-operator-dockercfg-2sf4f" (22-Jan-2026 18:00:33.137) (total time: 12892ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[477628240]: ---"Objects listed" error: 12891ms (18:00:46.029) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[477628240]: [12.892015691s] [12.892015691s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.029395 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2sf4f" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.040830 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.041390 4758 scope.go:117] "RemoveContainer" containerID="40620850c0b41ba5d105b5476e01b243745145ee1653fe36b73b07bb40385f91" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.048651 4758 trace.go:236] Trace[2094953872]: "Reflector ListAndWatch" name:object-"openstack"/"neutron-httpd-config" (22-Jan-2026 18:00:33.161) (total time: 12886ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[2094953872]: ---"Objects listed" error: 12886ms (18:00:46.048) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[2094953872]: [12.886901012s] [12.886901012s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.048692 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.070431 4758 trace.go:236] Trace[1531474068]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-dockercfg-f62pw" (22-Jan-2026 18:00:33.179) (total time: 12890ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[1531474068]: ---"Objects listed" error: 12890ms (18:00:46.070) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[1531474068]: [12.890729696s] 
[12.890729696s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.070458 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.097610 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.106220 4758 trace.go:236] Trace[240118089]: "Reflector ListAndWatch" name:object-"openstack"/"keystone-keystone-dockercfg-q7l7k" (22-Jan-2026 18:00:33.201) (total time: 12904ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[240118089]: ---"Objects listed" error: 12904ms (18:00:46.106) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[240118089]: [12.90447067s] [12.90447067s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.106576 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q7l7k" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.129894 4758 trace.go:236] Trace[590929813]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" (22-Jan-2026 18:00:33.242) (total time: 12887ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[590929813]: ---"Objects listed" error: 12887ms (18:00:46.129) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[590929813]: [12.887197409s] [12.887197409s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.129926 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.137683 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" event={"ID":"c73a71b4-f1fd-4a6c-9832-ce9b48a5f220","Type":"ContainerStarted","Data":"bed6fce42629725d0d88a5e3ff55c192c9274204299b85326a9a4972a262017a"} Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.138603 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.147609 4758 trace.go:236] Trace[1518432977]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovnkube-config" (22-Jan-2026 18:00:33.248) (total time: 12898ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[1518432977]: ---"Objects listed" error: 12898ms (18:00:46.147) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[1518432977]: [12.898839067s] [12.898839067s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.148069 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.161032 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" event={"ID":"d67bb459-81fe-48a2-ac8a-cb4441bb35bb","Type":"ContainerStarted","Data":"448c9c1ff5c2bf7c7800a482c302a7095b026e28e6a55264a7fad83392bc0e4d"} Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.161395 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.173014 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.191211 4758 trace.go:236] Trace[595956856]: "Reflector ListAndWatch" name:object-"openstack"/"cert-neutron-internal-svc" (22-Jan-2026 18:00:33.253) (total time: 12937ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[595956856]: ---"Objects listed" error: 12937ms (18:00:46.191) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[595956856]: [12.937284925s] [12.937284925s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.191235 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.192302 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" event={"ID":"fa976a5e-7cd9-402f-9792-015ca1488d1f","Type":"ContainerStarted","Data":"723937e2eb977f116ff32a9d5e8fb3d130b56464417e7a0ffd6e16066c3f1cc7"} Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.192519 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.209112 4758 trace.go:236] Trace[930209101]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-cliconfig" (22-Jan-2026 18:00:33.256) (total time: 12952ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[930209101]: ---"Objects listed" error: 12952ms (18:00:46.209) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[930209101]: [12.952226293s] [12.952226293s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.209461 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.226396 4758 trace.go:236] Trace[1558372343]: "Reflector ListAndWatch" name:object-"openstack"/"ovsdbserver-nb" (22-Jan-2026 18:00:33.290) (total time: 12935ms): Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[1558372343]: ---"Objects listed" error: 12935ms (18:00:46.226) Jan 22 18:00:46 crc kubenswrapper[4758]: Trace[1558372343]: [12.935490465s] [12.935490465s] END Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.226422 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.264886 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-brw4q" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.288515 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.310875 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zvr2k" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.328073 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-th7td" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.346697 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.368553 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.389252 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qdnhd" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.425102 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.445735 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.467115 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.484920 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.506580 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.527131 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.546519 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.563996 4758 request.go:700] Waited for 2.249307371s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=84220 Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.567668 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.588090 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qcl9m" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.612591 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.628303 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.645226 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.666316 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.685334 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.705333 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.725434 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.750155 4758 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xgjlh" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.783339 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.790515 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.842094 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nwvvt" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.842489 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4ftsd" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.846632 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.872585 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.888973 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-g7xdx" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.907250 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.927130 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.945647 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.964795 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lpprz" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.968109 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 18:00:46 crc kubenswrapper[4758]: I0122 18:00:46.987430 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.010156 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.026296 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.047201 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.067100 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.087393 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.108657 4758 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.125809 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.147649 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.164937 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.185243 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.205355 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.215781 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" event={"ID":"f5135718-a42b-4089-922b-9fba781fe6db","Type":"ContainerStarted","Data":"8e13169dc0c6dd441ebd02b3a4a27bda50dfaaad4f27a0c85f2e35215876258d"} Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.262481 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.263225 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.268318 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.307104 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.345243 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.367867 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.392801 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.422468 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.425557 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.445795 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2zlds" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.485222 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.506818 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.546588 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.598856 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-675f79667-vjvtj" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.607464 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.688541 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.703550 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf" containerName="galera" probeResult="failure" output="command timed out" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.706403 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf" containerName="galera" probeResult="failure" output="command timed out" Jan 22 18:00:47 crc kubenswrapper[4758]: I0122 18:00:47.944999 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.004599 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.105761 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.226407 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" event={"ID":"7d2439ad-1ca6-4c24-9d15-e04f0f89aedf","Type":"ContainerStarted","Data":"14a8dbe44b9b80a41e4c56d9ac443ed645fa2c03ff8bc1eb83c4af41ba1a10d4"} Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.226917 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.228261 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" event={"ID":"71c16ac1-3276-4096-93c5-d10765320713","Type":"ContainerStarted","Data":"109915eb3e394104dc6d9abe8c718c9cddf8633e4763d33486aff69c270c6890"} Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.228380 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.230325 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" event={"ID":"40845474-36a2-48c0-a0df-af5deb2a31fd","Type":"ContainerStarted","Data":"3aec4dd5219c74d553f5a393e3967d207d3db740367551cd19f8d42e5892b973"} Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.230507 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.232343 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" event={"ID":"19b4b900-d90f-4e59-b082-61f058f5882b","Type":"ContainerStarted","Data":"f0f82a29b053da2722c7495a7bb783003ff5bbadaa9bb7b5f711834baddd17da"} Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.232570 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.234446 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" event={"ID":"16d19f40-45e9-4f1a-b953-e5c68ca014f3","Type":"ContainerStarted","Data":"237bcaccf5c55b4c4899ee435e0c49d11d7a6fb6b2e28ff1bfdb16b8d41383a5"} Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.234677 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 18:00:48 crc kubenswrapper[4758]: I0122 18:00:48.270370 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 18:00:49 crc kubenswrapper[4758]: I0122 18:00:49.817204 4758 scope.go:117] "RemoveContainer" containerID="0874a7ddc92ab0b24afc711eb8d63f639a492a7de869c8a7af586bf54214b376" Jan 22 18:00:50 crc kubenswrapper[4758]: I0122 18:00:50.256180 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" event={"ID":"8afd29cc-2dab-460e-ad9d-f17690c15f41","Type":"ContainerStarted","Data":"b95331d97c2af8521ead3b67015b87ef8bafc473c844c167241ddf346b609c25"} Jan 22 18:00:50 crc kubenswrapper[4758]: I0122 18:00:50.256807 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 18:00:50 crc kubenswrapper[4758]: I0122 18:00:50.808197 4758 scope.go:117] "RemoveContainer" containerID="bc4f970a22c54315f6513899232257efae4e7e4b6f571d8f0a84f9b878900842" Jan 22 18:00:50 crc kubenswrapper[4758]: I0122 18:00:50.808610 4758 scope.go:117] "RemoveContainer" containerID="8d740abd0ed4523b0bbc53fb6cb986e3dd12d30030fb5decbccb8d5c79e3cb4d" Jan 22 18:00:50 crc kubenswrapper[4758]: I0122 18:00:50.808959 4758 scope.go:117] "RemoveContainer" containerID="00f9f7e22c37037c5a3da51729e231d9b6af70fe75b76ee1a114d7df66735fd4" Jan 22 18:00:50 crc kubenswrapper[4758]: I0122 18:00:50.809543 4758 scope.go:117] "RemoveContainer" containerID="dee0e88f7ebd2c75fbdae41aff3b519def894d0bc14fe932409343bfae737e93" Jan 22 18:00:51 crc kubenswrapper[4758]: I0122 18:00:51.279729 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qg57g" event={"ID":"86017532-da20-4917-8f8b-34190218edc2","Type":"ContainerStarted","Data":"9e541569f7cd2d0056af7825ce452c60ae5c9850c6196bb360e5e1d28f75f385"} Jan 22 18:00:51 crc kubenswrapper[4758]: I0122 18:00:51.288331 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" event={"ID":"35a3fafd-45ea-465d-90ef-36148a60685e","Type":"ContainerStarted","Data":"5a2d3ef4be7de63fe4ef85c24d5d89d4fd120608455020aa2ba7a2cc465397aa"} Jan 22 18:00:51 crc 
kubenswrapper[4758]: I0122 18:00:51.288619 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 18:00:52 crc kubenswrapper[4758]: I0122 18:00:52.301140 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d5a7a812-eaba-4ae7-8d97-e80ae4f70d78","Type":"ContainerStarted","Data":"48d298e2d442869501cd81a21b130a48ef886e2260471edb7170b57282d22665"} Jan 22 18:00:52 crc kubenswrapper[4758]: I0122 18:00:52.304914 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 18:00:52 crc kubenswrapper[4758]: I0122 18:00:52.308252 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cb5t8" event={"ID":"26d5529a-b270-40fc-9faa-037435dd2f80","Type":"ContainerStarted","Data":"d551a3f86377d873d08931ee5aca79e5b6342375eddaeb399bd443f346adee57"} Jan 22 18:00:52 crc kubenswrapper[4758]: I0122 18:00:52.808582 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:00:52 crc kubenswrapper[4758]: E0122 18:00:52.809041 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:00:54 crc kubenswrapper[4758]: I0122 18:00:54.814356 4758 scope.go:117] "RemoveContainer" containerID="542ed8d1796b1c80fd6e195ec7b32f904339447bd00b8e67d8382cb94f9a53f8" Jan 22 18:00:54 crc kubenswrapper[4758]: I0122 18:00:54.931249 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-skwtp" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.123181 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-d2nmz" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.360474 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f2gvw_6daa1231-490e-4ff7-9157-f49cdec96a5e/marketplace-operator/1.log" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.360555 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" event={"ID":"6daa1231-490e-4ff7-9157-f49cdec96a5e","Type":"ContainerStarted","Data":"6c0cc54091f22452a9f78b7fb7f750f4165fb116b9fc9f2d09b1e9dec7b4bd5d"} Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.361074 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.362074 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-f2gvw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" start-of-body= Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.362127 4758 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" podUID="6daa1231-490e-4ff7-9157-f49cdec96a5e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.399412 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.401532 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jr994" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.402070 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lb8mx" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.496428 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-7tzm4" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.532286 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4jthc" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.546489 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-zfcl5" Jan 22 18:00:55 crc kubenswrapper[4758]: I0122 18:00:55.728986 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4rlkk" Jan 22 18:00:56 crc kubenswrapper[4758]: I0122 18:00:56.042115 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-9vvd6" Jan 22 18:00:56 crc kubenswrapper[4758]: I0122 18:00:56.375555 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-f2gvw" Jan 22 18:00:56 crc kubenswrapper[4758]: I0122 18:00:56.766712 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-sb974" Jan 22 18:00:56 crc kubenswrapper[4758]: I0122 18:00:56.810271 4758 scope.go:117] "RemoveContainer" containerID="75150cc4b783423b7047afafc321b44caa1cb3d2820b82c5afc4ef8e57d0e276" Jan 22 18:00:58 crc kubenswrapper[4758]: I0122 18:00:58.388555 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" event={"ID":"e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7","Type":"ContainerStarted","Data":"4133e4a972b0ae32282df18261d7f69865de89638254f8a50ec3132f2019da12"} Jan 22 18:00:58 crc kubenswrapper[4758]: I0122 18:00:58.389277 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 18:00:58 crc kubenswrapper[4758]: I0122 18:00:58.426243 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.686019 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hk9ns"] Jan 22 18:00:59 crc kubenswrapper[4758]: E0122 18:00:59.687367 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="registry-server" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.687399 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="registry-server" Jan 22 18:00:59 crc kubenswrapper[4758]: E0122 18:00:59.687420 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="extract-utilities" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.687427 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="extract-utilities" Jan 22 18:00:59 crc kubenswrapper[4758]: E0122 18:00:59.687462 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="extract-content" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.687467 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="extract-content" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.687665 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a0da3e-0209-4ec2-9c6f-118c19d1499d" containerName="registry-server" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.689231 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.785569 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn8tn\" (UniqueName: \"kubernetes.io/projected/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-kube-api-access-cn8tn\") pod \"community-operators-hk9ns\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.785621 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-catalog-content\") pod \"community-operators-hk9ns\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.785859 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-utilities\") pod \"community-operators-hk9ns\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.887781 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-catalog-content\") pod \"community-operators-hk9ns\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.888153 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-utilities\") pod \"community-operators-hk9ns\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " pod="openshift-marketplace/community-operators-hk9ns" 
Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.888331 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn8tn\" (UniqueName: \"kubernetes.io/projected/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-kube-api-access-cn8tn\") pod \"community-operators-hk9ns\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.889051 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-utilities\") pod \"community-operators-hk9ns\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.889302 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-catalog-content\") pod \"community-operators-hk9ns\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:00:59 crc kubenswrapper[4758]: I0122 18:00:59.918366 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn8tn\" (UniqueName: \"kubernetes.io/projected/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-kube-api-access-cn8tn\") pod \"community-operators-hk9ns\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:01:00 crc kubenswrapper[4758]: I0122 18:01:00.009483 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:01:00 crc kubenswrapper[4758]: I0122 18:01:00.476880 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hk9ns"] Jan 22 18:01:01 crc kubenswrapper[4758]: I0122 18:01:01.000860 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hk9ns"] Jan 22 18:01:01 crc kubenswrapper[4758]: I0122 18:01:01.428153 4758 generic.go:334] "Generic (PLEG): container finished" podID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerID="161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443" exitCode=0 Jan 22 18:01:01 crc kubenswrapper[4758]: I0122 18:01:01.430782 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hk9ns" event={"ID":"cb5ed769-fbe1-4d5d-93fa-5d0477efc533","Type":"ContainerDied","Data":"161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443"} Jan 22 18:01:01 crc kubenswrapper[4758]: I0122 18:01:01.430823 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hk9ns" event={"ID":"cb5ed769-fbe1-4d5d-93fa-5d0477efc533","Type":"ContainerStarted","Data":"ecf473d87bcdf50360895c0ae243f74050e3ad42b653bff3c91839892c50b861"} Jan 22 18:01:03 crc kubenswrapper[4758]: I0122 18:01:03.448043 4758 generic.go:334] "Generic (PLEG): container finished" podID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerID="9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132" exitCode=0 Jan 22 18:01:03 crc kubenswrapper[4758]: I0122 18:01:03.448115 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hk9ns" 
event={"ID":"cb5ed769-fbe1-4d5d-93fa-5d0477efc533","Type":"ContainerDied","Data":"9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132"} Jan 22 18:01:04 crc kubenswrapper[4758]: I0122 18:01:04.462379 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hk9ns" event={"ID":"cb5ed769-fbe1-4d5d-93fa-5d0477efc533","Type":"ContainerStarted","Data":"17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04"} Jan 22 18:01:04 crc kubenswrapper[4758]: I0122 18:01:04.497489 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hk9ns" podStartSLOduration=18.030530248 podStartE2EDuration="20.497439802s" podCreationTimestamp="2026-01-22 18:00:44 +0000 UTC" firstStartedPulling="2026-01-22 18:01:01.434403319 +0000 UTC m=+5482.917742604" lastFinishedPulling="2026-01-22 18:01:03.901312873 +0000 UTC m=+5485.384652158" observedRunningTime="2026-01-22 18:01:04.486240727 +0000 UTC m=+5485.969580012" watchObservedRunningTime="2026-01-22 18:01:04.497439802 +0000 UTC m=+5485.980779097" Jan 22 18:01:04 crc kubenswrapper[4758]: I0122 18:01:04.979780 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-gd568" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.246798 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29485081-78sn6"] Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.248470 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.319591 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485081-78sn6"] Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.386047 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss"] Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.387860 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.393234 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.393498 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.408109 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnhvb\" (UniqueName: \"kubernetes.io/projected/a0a8915e-da6f-453e-bee3-3ef86673f477-kube-api-access-mnhvb\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.408171 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-combined-ca-bundle\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.408240 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-config-data\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.408343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-fernet-keys\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.435646 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss"] Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.497636 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-65454647d6-pr5dd"] Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.517530 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-fernet-keys\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.517602 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v9k7\" (UniqueName: \"kubernetes.io/projected/d567df43-3774-4bac-b052-b1171edaa044-kube-api-access-5v9k7\") pod \"collect-profiles-29485080-mncss\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.517644 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d567df43-3774-4bac-b052-b1171edaa044-config-volume\") pod \"collect-profiles-29485080-mncss\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.517718 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnhvb\" (UniqueName: \"kubernetes.io/projected/a0a8915e-da6f-453e-bee3-3ef86673f477-kube-api-access-mnhvb\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.517763 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-combined-ca-bundle\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.517844 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-config-data\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.517885 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d567df43-3774-4bac-b052-b1171edaa044-secret-volume\") pod \"collect-profiles-29485080-mncss\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.531598 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-fernet-keys\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.540621 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-config-data\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.570025 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-combined-ca-bundle\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.607392 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnhvb\" (UniqueName: \"kubernetes.io/projected/a0a8915e-da6f-453e-bee3-3ef86673f477-kube-api-access-mnhvb\") pod \"keystone-cron-29485081-78sn6\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.620146 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/d567df43-3774-4bac-b052-b1171edaa044-secret-volume\") pod \"collect-profiles-29485080-mncss\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.620258 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v9k7\" (UniqueName: \"kubernetes.io/projected/d567df43-3774-4bac-b052-b1171edaa044-kube-api-access-5v9k7\") pod \"collect-profiles-29485080-mncss\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.620307 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d567df43-3774-4bac-b052-b1171edaa044-config-volume\") pod \"collect-profiles-29485080-mncss\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.621487 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d567df43-3774-4bac-b052-b1171edaa044-config-volume\") pod \"collect-profiles-29485080-mncss\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.624916 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.660492 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d567df43-3774-4bac-b052-b1171edaa044-secret-volume\") pod \"collect-profiles-29485080-mncss\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.674437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v9k7\" (UniqueName: \"kubernetes.io/projected/d567df43-3774-4bac-b052-b1171edaa044-kube-api-access-5v9k7\") pod \"collect-profiles-29485080-mncss\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.706153 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f52e2571-4001-441f-b7b7-b4746ae1c10d" containerName="galera" probeResult="failure" output="command timed out" Jan 22 18:01:05 crc kubenswrapper[4758]: I0122 18:01:05.737233 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:06 crc kubenswrapper[4758]: I0122 18:01:06.367726 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485081-78sn6"] Jan 22 18:01:06 crc kubenswrapper[4758]: I0122 18:01:06.540712 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485081-78sn6" event={"ID":"a0a8915e-da6f-453e-bee3-3ef86673f477","Type":"ContainerStarted","Data":"9649480e919536d498fbfec536f3e221284afcc7cef49d3eda9ecd41aa279714"} Jan 22 18:01:06 crc kubenswrapper[4758]: W0122 18:01:06.692942 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd567df43_3774_4bac_b052_b1171edaa044.slice/crio-c585abb83a4050797d2aa4f76dd032a0b59ec879e1f9be87ce8494e6ef91525b WatchSource:0}: Error finding container c585abb83a4050797d2aa4f76dd032a0b59ec879e1f9be87ce8494e6ef91525b: Status 404 returned error can't find the container with id c585abb83a4050797d2aa4f76dd032a0b59ec879e1f9be87ce8494e6ef91525b Jan 22 18:01:06 crc kubenswrapper[4758]: I0122 18:01:06.696157 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss"] Jan 22 18:01:07 crc kubenswrapper[4758]: I0122 18:01:07.553076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485081-78sn6" event={"ID":"a0a8915e-da6f-453e-bee3-3ef86673f477","Type":"ContainerStarted","Data":"fdb79a6326df78825d94406eb73b1d79afdd6d79356abad95de9141387a84ad9"} Jan 22 18:01:07 crc kubenswrapper[4758]: I0122 18:01:07.555570 4758 generic.go:334] "Generic (PLEG): container finished" podID="d567df43-3774-4bac-b052-b1171edaa044" containerID="f749021b1554683a8846f058a1ebe881a20e61b073dc8ca037b974f78055e713" exitCode=0 Jan 22 18:01:07 crc kubenswrapper[4758]: I0122 18:01:07.555620 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" event={"ID":"d567df43-3774-4bac-b052-b1171edaa044","Type":"ContainerDied","Data":"f749021b1554683a8846f058a1ebe881a20e61b073dc8ca037b974f78055e713"} Jan 22 18:01:07 crc kubenswrapper[4758]: I0122 18:01:07.555653 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" event={"ID":"d567df43-3774-4bac-b052-b1171edaa044","Type":"ContainerStarted","Data":"c585abb83a4050797d2aa4f76dd032a0b59ec879e1f9be87ce8494e6ef91525b"} Jan 22 18:01:07 crc kubenswrapper[4758]: I0122 18:01:07.571169 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29485081-78sn6" podStartSLOduration=2.571146983 podStartE2EDuration="2.571146983s" podCreationTimestamp="2026-01-22 18:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 18:01:07.566320211 +0000 UTC m=+5489.049659506" watchObservedRunningTime="2026-01-22 18:01:07.571146983 +0000 UTC m=+5489.054486278" Jan 22 18:01:07 crc kubenswrapper[4758]: I0122 18:01:07.809432 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:01:07 crc kubenswrapper[4758]: E0122 18:01:07.809670 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.162205 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.244359 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d567df43-3774-4bac-b052-b1171edaa044-secret-volume\") pod \"d567df43-3774-4bac-b052-b1171edaa044\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.244432 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d567df43-3774-4bac-b052-b1171edaa044-config-volume\") pod \"d567df43-3774-4bac-b052-b1171edaa044\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.244558 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v9k7\" (UniqueName: \"kubernetes.io/projected/d567df43-3774-4bac-b052-b1171edaa044-kube-api-access-5v9k7\") pod \"d567df43-3774-4bac-b052-b1171edaa044\" (UID: \"d567df43-3774-4bac-b052-b1171edaa044\") " Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.245655 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d567df43-3774-4bac-b052-b1171edaa044-config-volume" (OuterVolumeSpecName: "config-volume") pod "d567df43-3774-4bac-b052-b1171edaa044" (UID: "d567df43-3774-4bac-b052-b1171edaa044"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.251753 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d567df43-3774-4bac-b052-b1171edaa044-kube-api-access-5v9k7" (OuterVolumeSpecName: "kube-api-access-5v9k7") pod "d567df43-3774-4bac-b052-b1171edaa044" (UID: "d567df43-3774-4bac-b052-b1171edaa044"). InnerVolumeSpecName "kube-api-access-5v9k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.268537 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d567df43-3774-4bac-b052-b1171edaa044-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d567df43-3774-4bac-b052-b1171edaa044" (UID: "d567df43-3774-4bac-b052-b1171edaa044"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.347432 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v9k7\" (UniqueName: \"kubernetes.io/projected/d567df43-3774-4bac-b052-b1171edaa044-kube-api-access-5v9k7\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.347477 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d567df43-3774-4bac-b052-b1171edaa044-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.347490 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d567df43-3774-4bac-b052-b1171edaa044-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.578247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" event={"ID":"d567df43-3774-4bac-b052-b1171edaa044","Type":"ContainerDied","Data":"c585abb83a4050797d2aa4f76dd032a0b59ec879e1f9be87ce8494e6ef91525b"} Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.578292 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485080-mncss" Jan 22 18:01:09 crc kubenswrapper[4758]: I0122 18:01:09.578372 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c585abb83a4050797d2aa4f76dd032a0b59ec879e1f9be87ce8494e6ef91525b" Jan 22 18:01:10 crc kubenswrapper[4758]: I0122 18:01:10.010724 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:01:10 crc kubenswrapper[4758]: I0122 18:01:10.011735 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:01:10 crc kubenswrapper[4758]: I0122 18:01:10.264577 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:01:10 crc kubenswrapper[4758]: I0122 18:01:10.643287 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:01:10 crc kubenswrapper[4758]: I0122 18:01:10.708356 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hk9ns"] Jan 22 18:01:11 crc kubenswrapper[4758]: I0122 18:01:11.384211 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j"] Jan 22 18:01:11 crc kubenswrapper[4758]: I0122 18:01:11.395678 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485035-ntr6j"] Jan 22 18:01:12 crc kubenswrapper[4758]: I0122 18:01:12.609632 4758 generic.go:334] "Generic (PLEG): container finished" podID="a0a8915e-da6f-453e-bee3-3ef86673f477" containerID="fdb79a6326df78825d94406eb73b1d79afdd6d79356abad95de9141387a84ad9" exitCode=0 Jan 22 18:01:12 crc kubenswrapper[4758]: I0122 18:01:12.609693 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485081-78sn6" event={"ID":"a0a8915e-da6f-453e-bee3-3ef86673f477","Type":"ContainerDied","Data":"fdb79a6326df78825d94406eb73b1d79afdd6d79356abad95de9141387a84ad9"} Jan 22 18:01:12 crc 
kubenswrapper[4758]: I0122 18:01:12.610154 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hk9ns" podUID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerName="registry-server" containerID="cri-o://17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04" gracePeriod=2 Jan 22 18:01:12 crc kubenswrapper[4758]: I0122 18:01:12.822028 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748791b9-ce3e-4a89-8098-318c6da7b3db" path="/var/lib/kubelet/pods/748791b9-ce3e-4a89-8098-318c6da7b3db/volumes" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.146915 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.237406 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-utilities\") pod \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.237637 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn8tn\" (UniqueName: \"kubernetes.io/projected/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-kube-api-access-cn8tn\") pod \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.237684 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-catalog-content\") pod \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\" (UID: \"cb5ed769-fbe1-4d5d-93fa-5d0477efc533\") " Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.238318 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-utilities" (OuterVolumeSpecName: "utilities") pod "cb5ed769-fbe1-4d5d-93fa-5d0477efc533" (UID: "cb5ed769-fbe1-4d5d-93fa-5d0477efc533"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.243316 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-kube-api-access-cn8tn" (OuterVolumeSpecName: "kube-api-access-cn8tn") pod "cb5ed769-fbe1-4d5d-93fa-5d0477efc533" (UID: "cb5ed769-fbe1-4d5d-93fa-5d0477efc533"). InnerVolumeSpecName "kube-api-access-cn8tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.299320 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb5ed769-fbe1-4d5d-93fa-5d0477efc533" (UID: "cb5ed769-fbe1-4d5d-93fa-5d0477efc533"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.340233 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn8tn\" (UniqueName: \"kubernetes.io/projected/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-kube-api-access-cn8tn\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.340371 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.340435 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5ed769-fbe1-4d5d-93fa-5d0477efc533-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.625344 4758 generic.go:334] "Generic (PLEG): container finished" podID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerID="17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04" exitCode=0 Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.625449 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hk9ns" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.625446 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hk9ns" event={"ID":"cb5ed769-fbe1-4d5d-93fa-5d0477efc533","Type":"ContainerDied","Data":"17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04"} Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.625543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hk9ns" event={"ID":"cb5ed769-fbe1-4d5d-93fa-5d0477efc533","Type":"ContainerDied","Data":"ecf473d87bcdf50360895c0ae243f74050e3ad42b653bff3c91839892c50b861"} Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.625584 4758 scope.go:117] "RemoveContainer" containerID="17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.674422 4758 scope.go:117] "RemoveContainer" containerID="9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.676007 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hk9ns"] Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.707049 4758 scope.go:117] "RemoveContainer" containerID="161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.707612 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hk9ns"] Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.777248 4758 scope.go:117] "RemoveContainer" containerID="17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04" Jan 22 18:01:13 crc kubenswrapper[4758]: E0122 18:01:13.777666 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04\": container with ID starting with 17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04 not found: ID does not exist" containerID="17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.777700 
4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04"} err="failed to get container status \"17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04\": rpc error: code = NotFound desc = could not find container \"17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04\": container with ID starting with 17cc9e4013ecf4baec1037b1d43e413da08e2ed6c66053a2d1baadd66b218c04 not found: ID does not exist" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.777725 4758 scope.go:117] "RemoveContainer" containerID="9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132" Jan 22 18:01:13 crc kubenswrapper[4758]: E0122 18:01:13.778098 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132\": container with ID starting with 9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132 not found: ID does not exist" containerID="9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.778130 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132"} err="failed to get container status \"9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132\": rpc error: code = NotFound desc = could not find container \"9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132\": container with ID starting with 9f0734c4c0fc1f6ef41f2e660b10166def034df9090882111a8b81a5afc73132 not found: ID does not exist" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.778149 4758 scope.go:117] "RemoveContainer" containerID="161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443" Jan 22 18:01:13 crc kubenswrapper[4758]: E0122 18:01:13.778364 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443\": container with ID starting with 161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443 not found: ID does not exist" containerID="161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443" Jan 22 18:01:13 crc kubenswrapper[4758]: I0122 18:01:13.778391 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443"} err="failed to get container status \"161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443\": rpc error: code = NotFound desc = could not find container \"161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443\": container with ID starting with 161c3951a32544a5191388b130cdcc87147f1afc012601e57300af41bd0d0443 not found: ID does not exist" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.110397 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.261648 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnhvb\" (UniqueName: \"kubernetes.io/projected/a0a8915e-da6f-453e-bee3-3ef86673f477-kube-api-access-mnhvb\") pod \"a0a8915e-da6f-453e-bee3-3ef86673f477\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.261834 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-combined-ca-bundle\") pod \"a0a8915e-da6f-453e-bee3-3ef86673f477\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.261871 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-config-data\") pod \"a0a8915e-da6f-453e-bee3-3ef86673f477\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.261977 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-fernet-keys\") pod \"a0a8915e-da6f-453e-bee3-3ef86673f477\" (UID: \"a0a8915e-da6f-453e-bee3-3ef86673f477\") " Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.267650 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a0a8915e-da6f-453e-bee3-3ef86673f477" (UID: "a0a8915e-da6f-453e-bee3-3ef86673f477"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.273260 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a8915e-da6f-453e-bee3-3ef86673f477-kube-api-access-mnhvb" (OuterVolumeSpecName: "kube-api-access-mnhvb") pod "a0a8915e-da6f-453e-bee3-3ef86673f477" (UID: "a0a8915e-da6f-453e-bee3-3ef86673f477"). InnerVolumeSpecName "kube-api-access-mnhvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.300675 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0a8915e-da6f-453e-bee3-3ef86673f477" (UID: "a0a8915e-da6f-453e-bee3-3ef86673f477"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.323448 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-config-data" (OuterVolumeSpecName: "config-data") pod "a0a8915e-da6f-453e-bee3-3ef86673f477" (UID: "a0a8915e-da6f-453e-bee3-3ef86673f477"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.364775 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.364819 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnhvb\" (UniqueName: \"kubernetes.io/projected/a0a8915e-da6f-453e-bee3-3ef86673f477-kube-api-access-mnhvb\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.364835 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.364849 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a8915e-da6f-453e-bee3-3ef86673f477-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.645997 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485081-78sn6" event={"ID":"a0a8915e-da6f-453e-bee3-3ef86673f477","Type":"ContainerDied","Data":"9649480e919536d498fbfec536f3e221284afcc7cef49d3eda9ecd41aa279714"} Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.646045 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9649480e919536d498fbfec536f3e221284afcc7cef49d3eda9ecd41aa279714" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.646140 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29485081-78sn6" Jan 22 18:01:14 crc kubenswrapper[4758]: I0122 18:01:14.819034 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" path="/var/lib/kubelet/pods/cb5ed769-fbe1-4d5d-93fa-5d0477efc533/volumes" Jan 22 18:01:21 crc kubenswrapper[4758]: I0122 18:01:21.809661 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:01:21 crc kubenswrapper[4758]: E0122 18:01:21.810455 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:01:23 crc kubenswrapper[4758]: I0122 18:01:23.831162 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58fc8b87c6-qmw5r" Jan 22 18:01:30 crc kubenswrapper[4758]: I0122 18:01:30.591851 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" podUID="9deedfb3-0e0e-4287-81de-8131aac4b6b0" containerName="oauth-openshift" containerID="cri-o://fc184b4d0a6ffa3c0042ec525d291d32b21dad822f07345d3dd2db1dfc4585ba" gracePeriod=15 Jan 22 18:01:30 crc kubenswrapper[4758]: I0122 18:01:30.850031 4758 generic.go:334] "Generic (PLEG): container finished" podID="9deedfb3-0e0e-4287-81de-8131aac4b6b0" 
containerID="fc184b4d0a6ffa3c0042ec525d291d32b21dad822f07345d3dd2db1dfc4585ba" exitCode=0 Jan 22 18:01:30 crc kubenswrapper[4758]: I0122 18:01:30.850340 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" event={"ID":"9deedfb3-0e0e-4287-81de-8131aac4b6b0","Type":"ContainerDied","Data":"fc184b4d0a6ffa3c0042ec525d291d32b21dad822f07345d3dd2db1dfc4585ba"} Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.397368 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.440621 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-dir\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.441242 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-service-ca\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.441367 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-trusted-ca-bundle\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.441457 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-provider-selection\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.441608 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-login\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.441716 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-ocp-branding-template\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.440728 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.441366 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5548c468ff-42zhn"] Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.441823 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-session\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442132 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-serving-cert\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442218 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-policies\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442261 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-idp-0-file-data\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: E0122 18:01:31.442326 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9deedfb3-0e0e-4287-81de-8131aac4b6b0" containerName="oauth-openshift" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442353 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9deedfb3-0e0e-4287-81de-8131aac4b6b0" containerName="oauth-openshift" Jan 22 18:01:31 crc kubenswrapper[4758]: E0122 18:01:31.442372 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a8915e-da6f-453e-bee3-3ef86673f477" containerName="keystone-cron" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442381 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a8915e-da6f-453e-bee3-3ef86673f477" containerName="keystone-cron" Jan 22 18:01:31 crc kubenswrapper[4758]: E0122 18:01:31.442396 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerName="extract-utilities" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442405 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerName="extract-utilities" Jan 22 18:01:31 crc kubenswrapper[4758]: E0122 18:01:31.442421 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d567df43-3774-4bac-b052-b1171edaa044" containerName="collect-profiles" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442429 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d567df43-3774-4bac-b052-b1171edaa044" containerName="collect-profiles" Jan 22 18:01:31 crc kubenswrapper[4758]: E0122 18:01:31.442466 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerName="extract-content" Jan 22 18:01:31 crc 
kubenswrapper[4758]: I0122 18:01:31.442475 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerName="extract-content" Jan 22 18:01:31 crc kubenswrapper[4758]: E0122 18:01:31.442515 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerName="registry-server" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442525 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerName="registry-server" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442795 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d567df43-3774-4bac-b052-b1171edaa044" containerName="collect-profiles" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442820 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9deedfb3-0e0e-4287-81de-8131aac4b6b0" containerName="oauth-openshift" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442823 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442844 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5ed769-fbe1-4d5d-93fa-5d0477efc533" containerName="registry-server" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442859 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a8915e-da6f-453e-bee3-3ef86673f477" containerName="keystone-cron" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442862 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.442331 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-cliconfig\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.443083 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-router-certs\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.443155 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-error\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.443191 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr9wk\" (UniqueName: \"kubernetes.io/projected/9deedfb3-0e0e-4287-81de-8131aac4b6b0-kube-api-access-qr9wk\") pod \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\" (UID: \"9deedfb3-0e0e-4287-81de-8131aac4b6b0\") " Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.443565 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.443728 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.443978 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.444047 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.444123 4758 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9deedfb3-0e0e-4287-81de-8131aac4b6b0-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.444183 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.449241 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9deedfb3-0e0e-4287-81de-8131aac4b6b0-kube-api-access-qr9wk" (OuterVolumeSpecName: "kube-api-access-qr9wk") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "kube-api-access-qr9wk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.450990 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.451285 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.451255 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.454375 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.454910 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.455434 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.460669 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5548c468ff-42zhn"] Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.487959 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.490851 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.492210 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9deedfb3-0e0e-4287-81de-8131aac4b6b0" (UID: "9deedfb3-0e0e-4287-81de-8131aac4b6b0"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546306 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-service-ca\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546366 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-router-certs\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546410 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546443 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546461 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rv2k\" (UniqueName: \"kubernetes.io/projected/b3da5971-ff11-43e1-b2de-b07f26df226f-kube-api-access-2rv2k\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546570 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-template-login\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546602 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-session\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3da5971-ff11-43e1-b2de-b07f26df226f-audit-dir\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546650 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546668 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546685 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546732 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-template-error\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-audit-policies\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546904 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546920 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546932 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546944 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546956 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546967 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546978 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.546989 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.547000 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9deedfb3-0e0e-4287-81de-8131aac4b6b0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.547011 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr9wk\" (UniqueName: \"kubernetes.io/projected/9deedfb3-0e0e-4287-81de-8131aac4b6b0-kube-api-access-qr9wk\") on node \"crc\" DevicePath \"\"" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.648887 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-session\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.648953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3da5971-ff11-43e1-b2de-b07f26df226f-audit-dir\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.648987 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649014 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649041 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649094 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-template-error\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649159 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-audit-policies\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649204 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-service-ca\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649234 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-router-certs\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649277 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649316 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649344 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649372 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rv2k\" (UniqueName: \"kubernetes.io/projected/b3da5971-ff11-43e1-b2de-b07f26df226f-kube-api-access-2rv2k\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.649438 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-template-login\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.650275 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-audit-policies\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.650505 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3da5971-ff11-43e1-b2de-b07f26df226f-audit-dir\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.650982 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.651261 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-service-ca\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.651856 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.653677 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-template-login\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.654538 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.656039 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-router-certs\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.656796 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.657036 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.657525 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.658210 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-system-session\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.664140 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3da5971-ff11-43e1-b2de-b07f26df226f-v4-0-config-user-template-error\") pod \"oauth-openshift-5548c468ff-42zhn\" 
(UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.669787 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rv2k\" (UniqueName: \"kubernetes.io/projected/b3da5971-ff11-43e1-b2de-b07f26df226f-kube-api-access-2rv2k\") pod \"oauth-openshift-5548c468ff-42zhn\" (UID: \"b3da5971-ff11-43e1-b2de-b07f26df226f\") " pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.866518 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" event={"ID":"9deedfb3-0e0e-4287-81de-8131aac4b6b0","Type":"ContainerDied","Data":"31c7878860a43c0e87efbe64ae4c7904fda5c58eafbdea82749e9d63b92cd61a"} Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.866577 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65454647d6-pr5dd" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.866983 4758 scope.go:117] "RemoveContainer" containerID="fc184b4d0a6ffa3c0042ec525d291d32b21dad822f07345d3dd2db1dfc4585ba" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.884516 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.956590 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-65454647d6-pr5dd"] Jan 22 18:01:31 crc kubenswrapper[4758]: I0122 18:01:31.970164 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-65454647d6-pr5dd"] Jan 22 18:01:32 crc kubenswrapper[4758]: I0122 18:01:32.456239 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5548c468ff-42zhn"] Jan 22 18:01:32 crc kubenswrapper[4758]: I0122 18:01:32.845776 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9deedfb3-0e0e-4287-81de-8131aac4b6b0" path="/var/lib/kubelet/pods/9deedfb3-0e0e-4287-81de-8131aac4b6b0/volumes" Jan 22 18:01:32 crc kubenswrapper[4758]: I0122 18:01:32.879105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" event={"ID":"b3da5971-ff11-43e1-b2de-b07f26df226f","Type":"ContainerStarted","Data":"b4fd8f4ce1582cb98335f197d467e9d58320bad3ebf6eafa3e940281a30c4bc1"} Jan 22 18:01:33 crc kubenswrapper[4758]: I0122 18:01:33.808705 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:01:33 crc kubenswrapper[4758]: E0122 18:01:33.809762 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:01:33 crc kubenswrapper[4758]: I0122 18:01:33.892484 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" 
event={"ID":"b3da5971-ff11-43e1-b2de-b07f26df226f","Type":"ContainerStarted","Data":"dc421fec82801a2c1a2302798ff5eb48ce2516085ab2712a11736e6055734781"} Jan 22 18:01:33 crc kubenswrapper[4758]: I0122 18:01:33.894388 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:33 crc kubenswrapper[4758]: I0122 18:01:33.917694 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" Jan 22 18:01:33 crc kubenswrapper[4758]: I0122 18:01:33.931062 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5548c468ff-42zhn" podStartSLOduration=28.931042042 podStartE2EDuration="28.931042042s" podCreationTimestamp="2026-01-22 18:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 18:01:33.924106812 +0000 UTC m=+5515.407446097" watchObservedRunningTime="2026-01-22 18:01:33.931042042 +0000 UTC m=+5515.414381317" Jan 22 18:01:48 crc kubenswrapper[4758]: I0122 18:01:48.814270 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:01:48 crc kubenswrapper[4758]: E0122 18:01:48.815064 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:01:59 crc kubenswrapper[4758]: I0122 18:01:59.808358 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:01:59 crc kubenswrapper[4758]: E0122 18:01:59.809586 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:02:03 crc kubenswrapper[4758]: I0122 18:02:03.639218 4758 scope.go:117] "RemoveContainer" containerID="35ec117f9c484d69b152a3eaba3229c7b0ea74ffb48c0b003079715239cdcb7a" Jan 22 18:02:13 crc kubenswrapper[4758]: I0122 18:02:13.808712 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:02:13 crc kubenswrapper[4758]: E0122 18:02:13.810605 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:02:26 crc kubenswrapper[4758]: I0122 18:02:26.809764 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:02:26 crc kubenswrapper[4758]: 
E0122 18:02:26.810800 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:02:37 crc kubenswrapper[4758]: I0122 18:02:37.808700 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:02:37 crc kubenswrapper[4758]: E0122 18:02:37.811121 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:02:50 crc kubenswrapper[4758]: I0122 18:02:50.809709 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:02:51 crc kubenswrapper[4758]: I0122 18:02:51.733105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"968895627d8cf4bbc9c0c35c061cd3b0ead5fb8b196a6743b3a3d1f845f146b6"} Jan 22 18:05:13 crc kubenswrapper[4758]: I0122 18:05:13.837511 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:05:13 crc kubenswrapper[4758]: I0122 18:05:13.838295 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:05:43 crc kubenswrapper[4758]: I0122 18:05:43.837079 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:05:43 crc kubenswrapper[4758]: I0122 18:05:43.837530 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:06:13 crc kubenswrapper[4758]: I0122 18:06:13.837333 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:06:13 crc kubenswrapper[4758]: I0122 18:06:13.838441 
4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:06:13 crc kubenswrapper[4758]: I0122 18:06:13.838733 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 18:06:13 crc kubenswrapper[4758]: I0122 18:06:13.840489 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"968895627d8cf4bbc9c0c35c061cd3b0ead5fb8b196a6743b3a3d1f845f146b6"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 18:06:13 crc kubenswrapper[4758]: I0122 18:06:13.840647 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://968895627d8cf4bbc9c0c35c061cd3b0ead5fb8b196a6743b3a3d1f845f146b6" gracePeriod=600 Jan 22 18:06:13 crc kubenswrapper[4758]: I0122 18:06:13.977553 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="968895627d8cf4bbc9c0c35c061cd3b0ead5fb8b196a6743b3a3d1f845f146b6" exitCode=0 Jan 22 18:06:13 crc kubenswrapper[4758]: I0122 18:06:13.977658 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"968895627d8cf4bbc9c0c35c061cd3b0ead5fb8b196a6743b3a3d1f845f146b6"} Jan 22 18:06:13 crc kubenswrapper[4758]: I0122 18:06:13.977709 4758 scope.go:117] "RemoveContainer" containerID="71d0f9a93a1f198cee3e61be87dac5fd13220229181dc2ee3ad7a9d1aecf76fb" Jan 22 18:06:14 crc kubenswrapper[4758]: I0122 18:06:14.988885 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e"} Jan 22 18:08:43 crc kubenswrapper[4758]: I0122 18:08:43.837096 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:08:43 crc kubenswrapper[4758]: I0122 18:08:43.837929 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:09:13 crc kubenswrapper[4758]: I0122 18:09:13.838084 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 22 18:09:13 crc kubenswrapper[4758]: I0122 18:09:13.838798 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:09:43 crc kubenswrapper[4758]: I0122 18:09:43.837893 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:09:43 crc kubenswrapper[4758]: I0122 18:09:43.838667 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:09:43 crc kubenswrapper[4758]: I0122 18:09:43.838798 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 18:09:43 crc kubenswrapper[4758]: I0122 18:09:43.840074 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 18:09:43 crc kubenswrapper[4758]: I0122 18:09:43.840206 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" gracePeriod=600 Jan 22 18:09:44 crc kubenswrapper[4758]: E0122 18:09:44.636337 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:09:44 crc kubenswrapper[4758]: I0122 18:09:44.941992 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" exitCode=0 Jan 22 18:09:44 crc kubenswrapper[4758]: I0122 18:09:44.942086 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e"} Jan 22 18:09:44 crc kubenswrapper[4758]: I0122 18:09:44.942161 4758 scope.go:117] "RemoveContainer" containerID="968895627d8cf4bbc9c0c35c061cd3b0ead5fb8b196a6743b3a3d1f845f146b6" Jan 22 18:09:44 crc kubenswrapper[4758]: I0122 18:09:44.943458 4758 scope.go:117] 
"RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:09:44 crc kubenswrapper[4758]: E0122 18:09:44.943876 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:09:56 crc kubenswrapper[4758]: I0122 18:09:56.808539 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:09:56 crc kubenswrapper[4758]: E0122 18:09:56.809592 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:10:09 crc kubenswrapper[4758]: I0122 18:10:09.808956 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:10:09 crc kubenswrapper[4758]: E0122 18:10:09.810197 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.001045 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sqztp"] Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.005462 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.016231 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p76vx\" (UniqueName: \"kubernetes.io/projected/6b160272-25c8-4550-980f-34beb75bb611-kube-api-access-p76vx\") pod \"redhat-marketplace-sqztp\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.016409 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-catalog-content\") pod \"redhat-marketplace-sqztp\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.016543 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-utilities\") pod \"redhat-marketplace-sqztp\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.049049 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqztp"] Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.118556 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-catalog-content\") pod \"redhat-marketplace-sqztp\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.118922 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-utilities\") pod \"redhat-marketplace-sqztp\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.119041 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p76vx\" (UniqueName: \"kubernetes.io/projected/6b160272-25c8-4550-980f-34beb75bb611-kube-api-access-p76vx\") pod \"redhat-marketplace-sqztp\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.119874 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-utilities\") pod \"redhat-marketplace-sqztp\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.119908 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-catalog-content\") pod \"redhat-marketplace-sqztp\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.148587 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-p76vx\" (UniqueName: \"kubernetes.io/projected/6b160272-25c8-4550-980f-34beb75bb611-kube-api-access-p76vx\") pod \"redhat-marketplace-sqztp\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.343486 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:16 crc kubenswrapper[4758]: I0122 18:10:16.958877 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqztp"] Jan 22 18:10:16 crc kubenswrapper[4758]: W0122 18:10:16.969531 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b160272_25c8_4550_980f_34beb75bb611.slice/crio-c85f13c104c9a9bc5f615b15e3852a42634d293b52ac017f905d866eccc237fc WatchSource:0}: Error finding container c85f13c104c9a9bc5f615b15e3852a42634d293b52ac017f905d866eccc237fc: Status 404 returned error can't find the container with id c85f13c104c9a9bc5f615b15e3852a42634d293b52ac017f905d866eccc237fc Jan 22 18:10:17 crc kubenswrapper[4758]: I0122 18:10:17.305266 4758 generic.go:334] "Generic (PLEG): container finished" podID="6b160272-25c8-4550-980f-34beb75bb611" containerID="bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db" exitCode=0 Jan 22 18:10:17 crc kubenswrapper[4758]: I0122 18:10:17.305318 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqztp" event={"ID":"6b160272-25c8-4550-980f-34beb75bb611","Type":"ContainerDied","Data":"bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db"} Jan 22 18:10:17 crc kubenswrapper[4758]: I0122 18:10:17.305343 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqztp" event={"ID":"6b160272-25c8-4550-980f-34beb75bb611","Type":"ContainerStarted","Data":"c85f13c104c9a9bc5f615b15e3852a42634d293b52ac017f905d866eccc237fc"} Jan 22 18:10:17 crc kubenswrapper[4758]: I0122 18:10:17.308386 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 18:10:18 crc kubenswrapper[4758]: I0122 18:10:18.315345 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqztp" event={"ID":"6b160272-25c8-4550-980f-34beb75bb611","Type":"ContainerStarted","Data":"fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5"} Jan 22 18:10:19 crc kubenswrapper[4758]: I0122 18:10:19.327102 4758 generic.go:334] "Generic (PLEG): container finished" podID="6b160272-25c8-4550-980f-34beb75bb611" containerID="fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5" exitCode=0 Jan 22 18:10:19 crc kubenswrapper[4758]: I0122 18:10:19.327142 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqztp" event={"ID":"6b160272-25c8-4550-980f-34beb75bb611","Type":"ContainerDied","Data":"fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5"} Jan 22 18:10:20 crc kubenswrapper[4758]: I0122 18:10:20.349898 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqztp" event={"ID":"6b160272-25c8-4550-980f-34beb75bb611","Type":"ContainerStarted","Data":"0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094"} Jan 22 18:10:20 crc kubenswrapper[4758]: I0122 18:10:20.372694 4758 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sqztp" podStartSLOduration=2.9385000420000003 podStartE2EDuration="5.372667314s" podCreationTimestamp="2026-01-22 18:10:15 +0000 UTC" firstStartedPulling="2026-01-22 18:10:17.308020627 +0000 UTC m=+6038.791359912" lastFinishedPulling="2026-01-22 18:10:19.742187899 +0000 UTC m=+6041.225527184" observedRunningTime="2026-01-22 18:10:20.372043637 +0000 UTC m=+6041.855382932" watchObservedRunningTime="2026-01-22 18:10:20.372667314 +0000 UTC m=+6041.856006599" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.367912 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lcpkj"] Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.372104 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.396829 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lcpkj"] Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.483510 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-catalog-content\") pod \"redhat-operators-lcpkj\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.483599 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk28h\" (UniqueName: \"kubernetes.io/projected/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-kube-api-access-dk28h\") pod \"redhat-operators-lcpkj\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.483710 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-utilities\") pod \"redhat-operators-lcpkj\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.586149 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-catalog-content\") pod \"redhat-operators-lcpkj\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.586233 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk28h\" (UniqueName: \"kubernetes.io/projected/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-kube-api-access-dk28h\") pod \"redhat-operators-lcpkj\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.586338 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-utilities\") pod \"redhat-operators-lcpkj\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 
18:10:23.586842 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-utilities\") pod \"redhat-operators-lcpkj\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.586842 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-catalog-content\") pod \"redhat-operators-lcpkj\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.637724 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk28h\" (UniqueName: \"kubernetes.io/projected/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-kube-api-access-dk28h\") pod \"redhat-operators-lcpkj\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:23 crc kubenswrapper[4758]: I0122 18:10:23.694436 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:24 crc kubenswrapper[4758]: I0122 18:10:24.249583 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lcpkj"] Jan 22 18:10:24 crc kubenswrapper[4758]: I0122 18:10:24.398925 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcpkj" event={"ID":"b9ec2a89-3a43-4a38-bd50-5d2eae130e92","Type":"ContainerStarted","Data":"c8be710d87b81a7aa654f390b84bac26d44751c6d2a20e850ac39636fd6bd4fb"} Jan 22 18:10:24 crc kubenswrapper[4758]: I0122 18:10:24.809147 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:10:24 crc kubenswrapper[4758]: E0122 18:10:24.809652 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:10:25 crc kubenswrapper[4758]: I0122 18:10:25.409869 4758 generic.go:334] "Generic (PLEG): container finished" podID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerID="30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86" exitCode=0 Jan 22 18:10:25 crc kubenswrapper[4758]: I0122 18:10:25.409982 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcpkj" event={"ID":"b9ec2a89-3a43-4a38-bd50-5d2eae130e92","Type":"ContainerDied","Data":"30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86"} Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.153501 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6ssxk"] Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.155908 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.169861 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6ssxk"] Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.249468 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-utilities\") pod \"certified-operators-6ssxk\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.250257 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-catalog-content\") pod \"certified-operators-6ssxk\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.250508 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz978\" (UniqueName: \"kubernetes.io/projected/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-kube-api-access-dz978\") pod \"certified-operators-6ssxk\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.344054 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.344499 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.352511 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-catalog-content\") pod \"certified-operators-6ssxk\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.352616 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz978\" (UniqueName: \"kubernetes.io/projected/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-kube-api-access-dz978\") pod \"certified-operators-6ssxk\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.352687 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-utilities\") pod \"certified-operators-6ssxk\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.353038 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-catalog-content\") pod \"certified-operators-6ssxk\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.353376 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-utilities\") pod \"certified-operators-6ssxk\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.374213 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz978\" (UniqueName: \"kubernetes.io/projected/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-kube-api-access-dz978\") pod \"certified-operators-6ssxk\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.410717 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.477882 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:26 crc kubenswrapper[4758]: I0122 18:10:26.478991 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:28 crc kubenswrapper[4758]: I0122 18:10:28.404795 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6ssxk"] Jan 22 18:10:28 crc kubenswrapper[4758]: I0122 18:10:28.452518 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ssxk" event={"ID":"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7","Type":"ContainerStarted","Data":"c322505d9c43757a872bab332c17d7545b3d5bf1b375ef72fbdae36a304fe57f"} Jan 22 18:10:29 crc kubenswrapper[4758]: I0122 18:10:29.145450 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqztp"] Jan 22 18:10:29 crc kubenswrapper[4758]: I0122 18:10:29.464903 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcpkj" event={"ID":"b9ec2a89-3a43-4a38-bd50-5d2eae130e92","Type":"ContainerStarted","Data":"cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c"} Jan 22 18:10:29 crc kubenswrapper[4758]: I0122 18:10:29.468897 4758 generic.go:334] "Generic (PLEG): container finished" podID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerID="2ccf5657d2936b003fb5dbbd8ffddd003810e07e94e027f37d534a120e9abeb7" exitCode=0 Jan 22 18:10:29 crc kubenswrapper[4758]: I0122 18:10:29.469221 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sqztp" podUID="6b160272-25c8-4550-980f-34beb75bb611" containerName="registry-server" containerID="cri-o://0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094" gracePeriod=2 Jan 22 18:10:29 crc kubenswrapper[4758]: I0122 18:10:29.469610 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ssxk" event={"ID":"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7","Type":"ContainerDied","Data":"2ccf5657d2936b003fb5dbbd8ffddd003810e07e94e027f37d534a120e9abeb7"} Jan 22 18:10:29 crc kubenswrapper[4758]: I0122 18:10:29.956708 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.045104 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-utilities\") pod \"6b160272-25c8-4550-980f-34beb75bb611\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.045257 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p76vx\" (UniqueName: \"kubernetes.io/projected/6b160272-25c8-4550-980f-34beb75bb611-kube-api-access-p76vx\") pod \"6b160272-25c8-4550-980f-34beb75bb611\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.045289 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-catalog-content\") pod \"6b160272-25c8-4550-980f-34beb75bb611\" (UID: \"6b160272-25c8-4550-980f-34beb75bb611\") " Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.045933 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-utilities" (OuterVolumeSpecName: "utilities") pod "6b160272-25c8-4550-980f-34beb75bb611" (UID: "6b160272-25c8-4550-980f-34beb75bb611"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.054405 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b160272-25c8-4550-980f-34beb75bb611-kube-api-access-p76vx" (OuterVolumeSpecName: "kube-api-access-p76vx") pod "6b160272-25c8-4550-980f-34beb75bb611" (UID: "6b160272-25c8-4550-980f-34beb75bb611"). InnerVolumeSpecName "kube-api-access-p76vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.067217 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b160272-25c8-4550-980f-34beb75bb611" (UID: "6b160272-25c8-4550-980f-34beb75bb611"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.148262 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.148293 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p76vx\" (UniqueName: \"kubernetes.io/projected/6b160272-25c8-4550-980f-34beb75bb611-kube-api-access-p76vx\") on node \"crc\" DevicePath \"\"" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.148306 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b160272-25c8-4550-980f-34beb75bb611-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.482134 4758 generic.go:334] "Generic (PLEG): container finished" podID="6b160272-25c8-4550-980f-34beb75bb611" containerID="0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094" exitCode=0 Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.482884 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqztp" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.483143 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqztp" event={"ID":"6b160272-25c8-4550-980f-34beb75bb611","Type":"ContainerDied","Data":"0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094"} Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.483175 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqztp" event={"ID":"6b160272-25c8-4550-980f-34beb75bb611","Type":"ContainerDied","Data":"c85f13c104c9a9bc5f615b15e3852a42634d293b52ac017f905d866eccc237fc"} Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.483193 4758 scope.go:117] "RemoveContainer" containerID="0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.504394 4758 scope.go:117] "RemoveContainer" containerID="fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.536749 4758 scope.go:117] "RemoveContainer" containerID="bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.538323 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqztp"] Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.555010 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqztp"] Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.582977 4758 scope.go:117] "RemoveContainer" containerID="0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094" Jan 22 18:10:30 crc kubenswrapper[4758]: E0122 18:10:30.584963 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094\": container with ID starting with 0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094 not found: ID does not exist" containerID="0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.585038 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094"} err="failed to get container status \"0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094\": rpc error: code = NotFound desc = could not find container \"0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094\": container with ID starting with 0a058ce50efa5eb728bb2f3242faca0dac31a8249b1e7dd5c5cdc51728887094 not found: ID does not exist" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.585068 4758 scope.go:117] "RemoveContainer" containerID="fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5" Jan 22 18:10:30 crc kubenswrapper[4758]: E0122 18:10:30.585569 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5\": container with ID starting with fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5 not found: ID does not exist" containerID="fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.585648 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5"} err="failed to get container status \"fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5\": rpc error: code = NotFound desc = could not find container \"fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5\": container with ID starting with fa9c2a8c0a75153ce7510b1e50181856e22a151889f9f25c07d9fcb8332a1bb5 not found: ID does not exist" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.585706 4758 scope.go:117] "RemoveContainer" containerID="bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db" Jan 22 18:10:30 crc kubenswrapper[4758]: E0122 18:10:30.586049 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db\": container with ID starting with bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db not found: ID does not exist" containerID="bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.586070 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db"} err="failed to get container status \"bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db\": rpc error: code = NotFound desc = could not find container \"bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db\": container with ID starting with bff8ac0e7efef485da7e9bf225248f46ed4b1f02469fe7a73a2c2dd02e69d9db not found: ID does not exist" Jan 22 18:10:30 crc kubenswrapper[4758]: I0122 18:10:30.820166 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b160272-25c8-4550-980f-34beb75bb611" path="/var/lib/kubelet/pods/6b160272-25c8-4550-980f-34beb75bb611/volumes" Jan 22 18:10:34 crc kubenswrapper[4758]: I0122 18:10:34.529921 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ssxk" 
event={"ID":"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7","Type":"ContainerStarted","Data":"aa52da43ecf02e15ca22b0502e4c4936a0fbcd640a074b8ddfd6b7bd1aee5233"} Jan 22 18:10:35 crc kubenswrapper[4758]: I0122 18:10:35.809331 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:10:35 crc kubenswrapper[4758]: E0122 18:10:35.809911 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:10:37 crc kubenswrapper[4758]: I0122 18:10:37.563393 4758 generic.go:334] "Generic (PLEG): container finished" podID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerID="cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c" exitCode=0 Jan 22 18:10:37 crc kubenswrapper[4758]: I0122 18:10:37.563475 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcpkj" event={"ID":"b9ec2a89-3a43-4a38-bd50-5d2eae130e92","Type":"ContainerDied","Data":"cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c"} Jan 22 18:10:39 crc kubenswrapper[4758]: I0122 18:10:39.587561 4758 generic.go:334] "Generic (PLEG): container finished" podID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerID="aa52da43ecf02e15ca22b0502e4c4936a0fbcd640a074b8ddfd6b7bd1aee5233" exitCode=0 Jan 22 18:10:39 crc kubenswrapper[4758]: I0122 18:10:39.587618 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ssxk" event={"ID":"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7","Type":"ContainerDied","Data":"aa52da43ecf02e15ca22b0502e4c4936a0fbcd640a074b8ddfd6b7bd1aee5233"} Jan 22 18:10:40 crc kubenswrapper[4758]: I0122 18:10:40.600043 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcpkj" event={"ID":"b9ec2a89-3a43-4a38-bd50-5d2eae130e92","Type":"ContainerStarted","Data":"00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1"} Jan 22 18:10:40 crc kubenswrapper[4758]: I0122 18:10:40.625812 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lcpkj" podStartSLOduration=3.600981066 podStartE2EDuration="17.625793474s" podCreationTimestamp="2026-01-22 18:10:23 +0000 UTC" firstStartedPulling="2026-01-22 18:10:25.412707196 +0000 UTC m=+6046.896046481" lastFinishedPulling="2026-01-22 18:10:39.437519604 +0000 UTC m=+6060.920858889" observedRunningTime="2026-01-22 18:10:40.617499108 +0000 UTC m=+6062.100838403" watchObservedRunningTime="2026-01-22 18:10:40.625793474 +0000 UTC m=+6062.109132759" Jan 22 18:10:41 crc kubenswrapper[4758]: I0122 18:10:41.613037 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ssxk" event={"ID":"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7","Type":"ContainerStarted","Data":"a222eaf64281da98ec3fac7022f91cff51c653cacbc34f900d6a13c77496e54d"} Jan 22 18:10:41 crc kubenswrapper[4758]: I0122 18:10:41.644729 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6ssxk" podStartSLOduration=4.929687048 podStartE2EDuration="15.644711608s" podCreationTimestamp="2026-01-22 18:10:26 
+0000 UTC" firstStartedPulling="2026-01-22 18:10:29.471971993 +0000 UTC m=+6050.955311278" lastFinishedPulling="2026-01-22 18:10:40.186996553 +0000 UTC m=+6061.670335838" observedRunningTime="2026-01-22 18:10:41.632568176 +0000 UTC m=+6063.115907461" watchObservedRunningTime="2026-01-22 18:10:41.644711608 +0000 UTC m=+6063.128050893" Jan 22 18:10:43 crc kubenswrapper[4758]: I0122 18:10:43.695608 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:43 crc kubenswrapper[4758]: I0122 18:10:43.695954 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:44 crc kubenswrapper[4758]: I0122 18:10:44.746854 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lcpkj" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerName="registry-server" probeResult="failure" output=< Jan 22 18:10:44 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 18:10:44 crc kubenswrapper[4758]: > Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.621805 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8cp4n"] Jan 22 18:10:45 crc kubenswrapper[4758]: E0122 18:10:45.622219 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b160272-25c8-4550-980f-34beb75bb611" containerName="extract-content" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.622244 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b160272-25c8-4550-980f-34beb75bb611" containerName="extract-content" Jan 22 18:10:45 crc kubenswrapper[4758]: E0122 18:10:45.622259 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b160272-25c8-4550-980f-34beb75bb611" containerName="registry-server" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.622265 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b160272-25c8-4550-980f-34beb75bb611" containerName="registry-server" Jan 22 18:10:45 crc kubenswrapper[4758]: E0122 18:10:45.622285 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b160272-25c8-4550-980f-34beb75bb611" containerName="extract-utilities" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.622292 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b160272-25c8-4550-980f-34beb75bb611" containerName="extract-utilities" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.622517 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b160272-25c8-4550-980f-34beb75bb611" containerName="registry-server" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.623967 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.647511 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8cp4n"] Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.716252 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v29j9\" (UniqueName: \"kubernetes.io/projected/1e5feab4-64a2-4939-94e8-8cc542606942-kube-api-access-v29j9\") pod \"community-operators-8cp4n\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.716417 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-catalog-content\") pod \"community-operators-8cp4n\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.716503 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-utilities\") pod \"community-operators-8cp4n\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.818293 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v29j9\" (UniqueName: \"kubernetes.io/projected/1e5feab4-64a2-4939-94e8-8cc542606942-kube-api-access-v29j9\") pod \"community-operators-8cp4n\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.818452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-catalog-content\") pod \"community-operators-8cp4n\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.818514 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-utilities\") pod \"community-operators-8cp4n\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.819083 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-utilities\") pod \"community-operators-8cp4n\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.819508 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-catalog-content\") pod \"community-operators-8cp4n\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.846263 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v29j9\" (UniqueName: \"kubernetes.io/projected/1e5feab4-64a2-4939-94e8-8cc542606942-kube-api-access-v29j9\") pod \"community-operators-8cp4n\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:45 crc kubenswrapper[4758]: I0122 18:10:45.943940 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:46 crc kubenswrapper[4758]: I0122 18:10:46.478074 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:46 crc kubenswrapper[4758]: I0122 18:10:46.481977 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:46 crc kubenswrapper[4758]: I0122 18:10:46.534979 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:46 crc kubenswrapper[4758]: I0122 18:10:46.713094 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:46 crc kubenswrapper[4758]: I0122 18:10:46.789467 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8cp4n"] Jan 22 18:10:47 crc kubenswrapper[4758]: I0122 18:10:47.678988 4758 generic.go:334] "Generic (PLEG): container finished" podID="1e5feab4-64a2-4939-94e8-8cc542606942" containerID="3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7" exitCode=0 Jan 22 18:10:47 crc kubenswrapper[4758]: I0122 18:10:47.679679 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cp4n" event={"ID":"1e5feab4-64a2-4939-94e8-8cc542606942","Type":"ContainerDied","Data":"3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7"} Jan 22 18:10:47 crc kubenswrapper[4758]: I0122 18:10:47.680575 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cp4n" event={"ID":"1e5feab4-64a2-4939-94e8-8cc542606942","Type":"ContainerStarted","Data":"5cc3a188151068773adaa3932d1ad6a9a399031451de3942bd73fd5701d3112f"} Jan 22 18:10:49 crc kubenswrapper[4758]: I0122 18:10:49.701945 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cp4n" event={"ID":"1e5feab4-64a2-4939-94e8-8cc542606942","Type":"ContainerStarted","Data":"e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5"} Jan 22 18:10:49 crc kubenswrapper[4758]: I0122 18:10:49.987393 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6ssxk"] Jan 22 18:10:49 crc kubenswrapper[4758]: I0122 18:10:49.988012 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6ssxk" podUID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerName="registry-server" containerID="cri-o://a222eaf64281da98ec3fac7022f91cff51c653cacbc34f900d6a13c77496e54d" gracePeriod=2 Jan 22 18:10:50 crc kubenswrapper[4758]: I0122 18:10:50.807944 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:10:50 crc kubenswrapper[4758]: E0122 18:10:50.808275 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:10:51 crc kubenswrapper[4758]: I0122 18:10:51.749417 4758 generic.go:334] "Generic (PLEG): container finished" podID="1e5feab4-64a2-4939-94e8-8cc542606942" containerID="e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5" exitCode=0 Jan 22 18:10:51 crc kubenswrapper[4758]: I0122 18:10:51.749512 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cp4n" event={"ID":"1e5feab4-64a2-4939-94e8-8cc542606942","Type":"ContainerDied","Data":"e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5"} Jan 22 18:10:51 crc kubenswrapper[4758]: I0122 18:10:51.759781 4758 generic.go:334] "Generic (PLEG): container finished" podID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerID="a222eaf64281da98ec3fac7022f91cff51c653cacbc34f900d6a13c77496e54d" exitCode=0 Jan 22 18:10:51 crc kubenswrapper[4758]: I0122 18:10:51.759867 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ssxk" event={"ID":"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7","Type":"ContainerDied","Data":"a222eaf64281da98ec3fac7022f91cff51c653cacbc34f900d6a13c77496e54d"} Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.410808 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.464506 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-utilities\") pod \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.464928 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz978\" (UniqueName: \"kubernetes.io/projected/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-kube-api-access-dz978\") pod \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.464985 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-catalog-content\") pod \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\" (UID: \"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7\") " Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.465809 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-utilities" (OuterVolumeSpecName: "utilities") pod "ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" (UID: "ea08d26e-73e6-4758-8d0e-7635e3fdbdc7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.466653 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.481356 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-kube-api-access-dz978" (OuterVolumeSpecName: "kube-api-access-dz978") pod "ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" (UID: "ea08d26e-73e6-4758-8d0e-7635e3fdbdc7"). InnerVolumeSpecName "kube-api-access-dz978". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.520568 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" (UID: "ea08d26e-73e6-4758-8d0e-7635e3fdbdc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.569607 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz978\" (UniqueName: \"kubernetes.io/projected/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-kube-api-access-dz978\") on node \"crc\" DevicePath \"\"" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.570135 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.781483 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cp4n" event={"ID":"1e5feab4-64a2-4939-94e8-8cc542606942","Type":"ContainerStarted","Data":"1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0"} Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.786288 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ssxk" event={"ID":"ea08d26e-73e6-4758-8d0e-7635e3fdbdc7","Type":"ContainerDied","Data":"c322505d9c43757a872bab332c17d7545b3d5bf1b375ef72fbdae36a304fe57f"} Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.786401 4758 scope.go:117] "RemoveContainer" containerID="a222eaf64281da98ec3fac7022f91cff51c653cacbc34f900d6a13c77496e54d" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.788955 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6ssxk" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.815248 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8cp4n" podStartSLOduration=3.310662808 podStartE2EDuration="7.815221304s" podCreationTimestamp="2026-01-22 18:10:45 +0000 UTC" firstStartedPulling="2026-01-22 18:10:47.682057904 +0000 UTC m=+6069.165397189" lastFinishedPulling="2026-01-22 18:10:52.1866164 +0000 UTC m=+6073.669955685" observedRunningTime="2026-01-22 18:10:52.801567272 +0000 UTC m=+6074.284906567" watchObservedRunningTime="2026-01-22 18:10:52.815221304 +0000 UTC m=+6074.298560589" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.854317 4758 scope.go:117] "RemoveContainer" containerID="aa52da43ecf02e15ca22b0502e4c4936a0fbcd640a074b8ddfd6b7bd1aee5233" Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.863766 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6ssxk"] Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.894769 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6ssxk"] Jan 22 18:10:52 crc kubenswrapper[4758]: I0122 18:10:52.927787 4758 scope.go:117] "RemoveContainer" containerID="2ccf5657d2936b003fb5dbbd8ffddd003810e07e94e027f37d534a120e9abeb7" Jan 22 18:10:53 crc kubenswrapper[4758]: I0122 18:10:53.759108 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:53 crc kubenswrapper[4758]: I0122 18:10:53.837481 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:54 crc kubenswrapper[4758]: I0122 18:10:54.859194 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" path="/var/lib/kubelet/pods/ea08d26e-73e6-4758-8d0e-7635e3fdbdc7/volumes" Jan 22 18:10:55 crc kubenswrapper[4758]: I0122 18:10:55.945133 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:55 crc kubenswrapper[4758]: I0122 18:10:55.945491 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.005787 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.182789 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lcpkj"] Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.183034 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lcpkj" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerName="registry-server" containerID="cri-o://00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1" gracePeriod=2 Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.776892 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.885694 4758 generic.go:334] "Generic (PLEG): container finished" podID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerID="00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1" exitCode=0 Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.885812 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lcpkj" Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.885777 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcpkj" event={"ID":"b9ec2a89-3a43-4a38-bd50-5d2eae130e92","Type":"ContainerDied","Data":"00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1"} Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.885938 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcpkj" event={"ID":"b9ec2a89-3a43-4a38-bd50-5d2eae130e92","Type":"ContainerDied","Data":"c8be710d87b81a7aa654f390b84bac26d44751c6d2a20e850ac39636fd6bd4fb"} Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.885987 4758 scope.go:117] "RemoveContainer" containerID="00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1" Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.914593 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-utilities\") pod \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.914655 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-catalog-content\") pod \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.914780 4758 scope.go:117] "RemoveContainer" containerID="cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c" Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.914875 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk28h\" (UniqueName: \"kubernetes.io/projected/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-kube-api-access-dk28h\") pod \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\" (UID: \"b9ec2a89-3a43-4a38-bd50-5d2eae130e92\") " Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.916972 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-utilities" (OuterVolumeSpecName: "utilities") pod "b9ec2a89-3a43-4a38-bd50-5d2eae130e92" (UID: "b9ec2a89-3a43-4a38-bd50-5d2eae130e92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.927186 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-kube-api-access-dk28h" (OuterVolumeSpecName: "kube-api-access-dk28h") pod "b9ec2a89-3a43-4a38-bd50-5d2eae130e92" (UID: "b9ec2a89-3a43-4a38-bd50-5d2eae130e92"). InnerVolumeSpecName "kube-api-access-dk28h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:10:56 crc kubenswrapper[4758]: I0122 18:10:56.998681 4758 scope.go:117] "RemoveContainer" containerID="30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.018629 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk28h\" (UniqueName: \"kubernetes.io/projected/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-kube-api-access-dk28h\") on node \"crc\" DevicePath \"\"" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.019189 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.044448 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9ec2a89-3a43-4a38-bd50-5d2eae130e92" (UID: "b9ec2a89-3a43-4a38-bd50-5d2eae130e92"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.053797 4758 scope.go:117] "RemoveContainer" containerID="00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1" Jan 22 18:10:57 crc kubenswrapper[4758]: E0122 18:10:57.054495 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1\": container with ID starting with 00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1 not found: ID does not exist" containerID="00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.054584 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1"} err="failed to get container status \"00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1\": rpc error: code = NotFound desc = could not find container \"00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1\": container with ID starting with 00f8eaa3003e250aa41c0b4f0406f2e1983cde36f44d305705bc3c3251f3b3e1 not found: ID does not exist" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.054637 4758 scope.go:117] "RemoveContainer" containerID="cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c" Jan 22 18:10:57 crc kubenswrapper[4758]: E0122 18:10:57.055138 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c\": container with ID starting with cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c not found: ID does not exist" containerID="cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.055226 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c"} err="failed to get container status \"cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c\": rpc error: code = NotFound desc = could not find container 
\"cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c\": container with ID starting with cc115281c075977f69e32a0981ba9b7b03c8c745495b6e762010f50213041a7c not found: ID does not exist" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.055275 4758 scope.go:117] "RemoveContainer" containerID="30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86" Jan 22 18:10:57 crc kubenswrapper[4758]: E0122 18:10:57.055865 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86\": container with ID starting with 30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86 not found: ID does not exist" containerID="30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.055894 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86"} err="failed to get container status \"30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86\": rpc error: code = NotFound desc = could not find container \"30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86\": container with ID starting with 30d60e5172c0bace78fc7dceee0230dd2c79a951fcd135ab338b4cd3835d9d86 not found: ID does not exist" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.121975 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec2a89-3a43-4a38-bd50-5d2eae130e92-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.231154 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lcpkj"] Jan 22 18:10:57 crc kubenswrapper[4758]: I0122 18:10:57.247308 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lcpkj"] Jan 22 18:10:58 crc kubenswrapper[4758]: I0122 18:10:58.827375 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" path="/var/lib/kubelet/pods/b9ec2a89-3a43-4a38-bd50-5d2eae130e92/volumes" Jan 22 18:11:03 crc kubenswrapper[4758]: I0122 18:11:03.808622 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:11:03 crc kubenswrapper[4758]: E0122 18:11:03.809560 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:11:05 crc kubenswrapper[4758]: I0122 18:11:05.993006 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:11:06 crc kubenswrapper[4758]: I0122 18:11:06.046773 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8cp4n"] Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.005611 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8cp4n" 
podUID="1e5feab4-64a2-4939-94e8-8cc542606942" containerName="registry-server" containerID="cri-o://1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0" gracePeriod=2 Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.509488 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.660349 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-catalog-content\") pod \"1e5feab4-64a2-4939-94e8-8cc542606942\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.660961 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-utilities\") pod \"1e5feab4-64a2-4939-94e8-8cc542606942\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.661132 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v29j9\" (UniqueName: \"kubernetes.io/projected/1e5feab4-64a2-4939-94e8-8cc542606942-kube-api-access-v29j9\") pod \"1e5feab4-64a2-4939-94e8-8cc542606942\" (UID: \"1e5feab4-64a2-4939-94e8-8cc542606942\") " Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.661810 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-utilities" (OuterVolumeSpecName: "utilities") pod "1e5feab4-64a2-4939-94e8-8cc542606942" (UID: "1e5feab4-64a2-4939-94e8-8cc542606942"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.668170 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5feab4-64a2-4939-94e8-8cc542606942-kube-api-access-v29j9" (OuterVolumeSpecName: "kube-api-access-v29j9") pod "1e5feab4-64a2-4939-94e8-8cc542606942" (UID: "1e5feab4-64a2-4939-94e8-8cc542606942"). InnerVolumeSpecName "kube-api-access-v29j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.722164 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e5feab4-64a2-4939-94e8-8cc542606942" (UID: "1e5feab4-64a2-4939-94e8-8cc542606942"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.763343 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.763385 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e5feab4-64a2-4939-94e8-8cc542606942-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:11:07 crc kubenswrapper[4758]: I0122 18:11:07.763398 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v29j9\" (UniqueName: \"kubernetes.io/projected/1e5feab4-64a2-4939-94e8-8cc542606942-kube-api-access-v29j9\") on node \"crc\" DevicePath \"\"" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.016048 4758 generic.go:334] "Generic (PLEG): container finished" podID="1e5feab4-64a2-4939-94e8-8cc542606942" containerID="1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0" exitCode=0 Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.016102 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cp4n" event={"ID":"1e5feab4-64a2-4939-94e8-8cc542606942","Type":"ContainerDied","Data":"1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0"} Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.016134 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cp4n" event={"ID":"1e5feab4-64a2-4939-94e8-8cc542606942","Type":"ContainerDied","Data":"5cc3a188151068773adaa3932d1ad6a9a399031451de3942bd73fd5701d3112f"} Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.016153 4758 scope.go:117] "RemoveContainer" containerID="1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.016192 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8cp4n" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.042338 4758 scope.go:117] "RemoveContainer" containerID="e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.069606 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8cp4n"] Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.078597 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8cp4n"] Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.080685 4758 scope.go:117] "RemoveContainer" containerID="3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.119360 4758 scope.go:117] "RemoveContainer" containerID="1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0" Jan 22 18:11:08 crc kubenswrapper[4758]: E0122 18:11:08.119983 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0\": container with ID starting with 1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0 not found: ID does not exist" containerID="1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.120057 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0"} err="failed to get container status \"1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0\": rpc error: code = NotFound desc = could not find container \"1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0\": container with ID starting with 1491537cd37a4a1079118da07de0825c5b507651c786dd08ad63d5ec12198bb0 not found: ID does not exist" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.120089 4758 scope.go:117] "RemoveContainer" containerID="e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5" Jan 22 18:11:08 crc kubenswrapper[4758]: E0122 18:11:08.120620 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5\": container with ID starting with e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5 not found: ID does not exist" containerID="e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.120660 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5"} err="failed to get container status \"e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5\": rpc error: code = NotFound desc = could not find container \"e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5\": container with ID starting with e9eacfbac3c1fd9d93765db13f4ccc015f2e9843d456b6cc6fc673d97c388bb5 not found: ID does not exist" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.120692 4758 scope.go:117] "RemoveContainer" containerID="3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7" Jan 22 18:11:08 crc kubenswrapper[4758]: E0122 18:11:08.121098 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7\": container with ID starting with 3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7 not found: ID does not exist" containerID="3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.121129 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7"} err="failed to get container status \"3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7\": rpc error: code = NotFound desc = could not find container \"3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7\": container with ID starting with 3bb6bc7693c2abe73727752a2dce1ceb7bda1cc2f45f96307f945558d8c3f2d7 not found: ID does not exist" Jan 22 18:11:08 crc kubenswrapper[4758]: I0122 18:11:08.821130 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5feab4-64a2-4939-94e8-8cc542606942" path="/var/lib/kubelet/pods/1e5feab4-64a2-4939-94e8-8cc542606942/volumes" Jan 22 18:11:17 crc kubenswrapper[4758]: I0122 18:11:17.807964 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:11:17 crc kubenswrapper[4758]: E0122 18:11:17.808834 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:11:30 crc kubenswrapper[4758]: I0122 18:11:30.813522 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:11:30 crc kubenswrapper[4758]: E0122 18:11:30.816130 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:11:42 crc kubenswrapper[4758]: I0122 18:11:42.808864 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:11:42 crc kubenswrapper[4758]: E0122 18:11:42.810264 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:11:53 crc kubenswrapper[4758]: I0122 18:11:53.808369 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:11:53 crc kubenswrapper[4758]: E0122 18:11:53.809301 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:12:07 crc kubenswrapper[4758]: I0122 18:12:07.808438 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:12:07 crc kubenswrapper[4758]: E0122 18:12:07.809402 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:12:22 crc kubenswrapper[4758]: I0122 18:12:22.807678 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:12:22 crc kubenswrapper[4758]: E0122 18:12:22.808630 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:12:37 crc kubenswrapper[4758]: I0122 18:12:37.809188 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:12:37 crc kubenswrapper[4758]: E0122 18:12:37.810860 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:12:50 crc kubenswrapper[4758]: I0122 18:12:50.807804 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:12:50 crc kubenswrapper[4758]: E0122 18:12:50.808580 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:13:03 crc kubenswrapper[4758]: I0122 18:13:03.808636 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:13:03 crc kubenswrapper[4758]: E0122 18:13:03.809506 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:13:15 crc kubenswrapper[4758]: I0122 18:13:15.808490 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:13:15 crc kubenswrapper[4758]: E0122 18:13:15.809577 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:13:30 crc kubenswrapper[4758]: I0122 18:13:30.809680 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:13:30 crc kubenswrapper[4758]: E0122 18:13:30.811246 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:13:44 crc kubenswrapper[4758]: I0122 18:13:44.808128 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:13:44 crc kubenswrapper[4758]: E0122 18:13:44.808992 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:13:56 crc kubenswrapper[4758]: I0122 18:13:56.810768 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:13:56 crc kubenswrapper[4758]: E0122 18:13:56.811610 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:14:09 crc kubenswrapper[4758]: I0122 18:14:09.808892 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:14:09 crc kubenswrapper[4758]: E0122 18:14:09.812255 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" 
podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:14:21 crc kubenswrapper[4758]: I0122 18:14:21.808533 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:14:21 crc kubenswrapper[4758]: E0122 18:14:21.809620 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:14:32 crc kubenswrapper[4758]: I0122 18:14:32.809724 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:14:32 crc kubenswrapper[4758]: E0122 18:14:32.813280 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:14:47 crc kubenswrapper[4758]: I0122 18:14:47.809689 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:14:48 crc kubenswrapper[4758]: I0122 18:14:48.732774 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"1dd7b74fe085345e4dd7f1349a4ed2a791f1166d5bbada9f9c2c704283055387"} Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.154101 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq"] Jan 22 18:15:00 crc kubenswrapper[4758]: E0122 18:15:00.155236 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5feab4-64a2-4939-94e8-8cc542606942" containerName="registry-server" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.155254 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5feab4-64a2-4939-94e8-8cc542606942" containerName="registry-server" Jan 22 18:15:00 crc kubenswrapper[4758]: E0122 18:15:00.155280 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerName="extract-content" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.155288 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerName="extract-content" Jan 22 18:15:00 crc kubenswrapper[4758]: E0122 18:15:00.155310 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerName="extract-utilities" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.155319 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerName="extract-utilities" Jan 22 18:15:00 crc kubenswrapper[4758]: E0122 18:15:00.155333 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5feab4-64a2-4939-94e8-8cc542606942" containerName="extract-content" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 
18:15:00.155342 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5feab4-64a2-4939-94e8-8cc542606942" containerName="extract-content" Jan 22 18:15:00 crc kubenswrapper[4758]: E0122 18:15:00.155431 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5feab4-64a2-4939-94e8-8cc542606942" containerName="extract-utilities" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.155439 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5feab4-64a2-4939-94e8-8cc542606942" containerName="extract-utilities" Jan 22 18:15:00 crc kubenswrapper[4758]: E0122 18:15:00.155450 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerName="registry-server" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.155459 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerName="registry-server" Jan 22 18:15:00 crc kubenswrapper[4758]: E0122 18:15:00.155468 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerName="extract-content" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.155475 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerName="extract-content" Jan 22 18:15:00 crc kubenswrapper[4758]: E0122 18:15:00.155490 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerName="registry-server" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.155496 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerName="registry-server" Jan 22 18:15:00 crc kubenswrapper[4758]: E0122 18:15:00.155511 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerName="extract-utilities" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.155518 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerName="extract-utilities" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.157568 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5feab4-64a2-4939-94e8-8cc542606942" containerName="registry-server" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.157609 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ec2a89-3a43-4a38-bd50-5d2eae130e92" containerName="registry-server" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.157637 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea08d26e-73e6-4758-8d0e-7635e3fdbdc7" containerName="registry-server" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.158780 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.162622 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.163437 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.178052 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq"] Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.182352 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c9439c7-043e-45a5-9bb1-c7f754c3186d-secret-volume\") pod \"collect-profiles-29485095-5rcdq\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.182456 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c9439c7-043e-45a5-9bb1-c7f754c3186d-config-volume\") pod \"collect-profiles-29485095-5rcdq\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.182631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5bp\" (UniqueName: \"kubernetes.io/projected/9c9439c7-043e-45a5-9bb1-c7f754c3186d-kube-api-access-jl5bp\") pod \"collect-profiles-29485095-5rcdq\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.291229 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c9439c7-043e-45a5-9bb1-c7f754c3186d-config-volume\") pod \"collect-profiles-29485095-5rcdq\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.291660 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl5bp\" (UniqueName: \"kubernetes.io/projected/9c9439c7-043e-45a5-9bb1-c7f754c3186d-kube-api-access-jl5bp\") pod \"collect-profiles-29485095-5rcdq\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.291868 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c9439c7-043e-45a5-9bb1-c7f754c3186d-secret-volume\") pod \"collect-profiles-29485095-5rcdq\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.292518 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c9439c7-043e-45a5-9bb1-c7f754c3186d-config-volume\") pod 
\"collect-profiles-29485095-5rcdq\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.301190 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c9439c7-043e-45a5-9bb1-c7f754c3186d-secret-volume\") pod \"collect-profiles-29485095-5rcdq\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.312020 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl5bp\" (UniqueName: \"kubernetes.io/projected/9c9439c7-043e-45a5-9bb1-c7f754c3186d-kube-api-access-jl5bp\") pod \"collect-profiles-29485095-5rcdq\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:00 crc kubenswrapper[4758]: I0122 18:15:00.595066 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:01 crc kubenswrapper[4758]: I0122 18:15:01.088884 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq"] Jan 22 18:15:01 crc kubenswrapper[4758]: W0122 18:15:01.091664 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c9439c7_043e_45a5_9bb1_c7f754c3186d.slice/crio-fa2b087e06adef1e4f284299952afeaa81f972bade988ea40ad1495dea998d7f WatchSource:0}: Error finding container fa2b087e06adef1e4f284299952afeaa81f972bade988ea40ad1495dea998d7f: Status 404 returned error can't find the container with id fa2b087e06adef1e4f284299952afeaa81f972bade988ea40ad1495dea998d7f Jan 22 18:15:01 crc kubenswrapper[4758]: I0122 18:15:01.892474 4758 generic.go:334] "Generic (PLEG): container finished" podID="9c9439c7-043e-45a5-9bb1-c7f754c3186d" containerID="c436815129e4817193c376775dc1210386042ef5a4ec920d67dd4d1d0398e099" exitCode=0 Jan 22 18:15:01 crc kubenswrapper[4758]: I0122 18:15:01.892603 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" event={"ID":"9c9439c7-043e-45a5-9bb1-c7f754c3186d","Type":"ContainerDied","Data":"c436815129e4817193c376775dc1210386042ef5a4ec920d67dd4d1d0398e099"} Jan 22 18:15:01 crc kubenswrapper[4758]: I0122 18:15:01.892820 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" event={"ID":"9c9439c7-043e-45a5-9bb1-c7f754c3186d","Type":"ContainerStarted","Data":"fa2b087e06adef1e4f284299952afeaa81f972bade988ea40ad1495dea998d7f"} Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.309923 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.354020 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c9439c7-043e-45a5-9bb1-c7f754c3186d-secret-volume\") pod \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.354146 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c9439c7-043e-45a5-9bb1-c7f754c3186d-config-volume\") pod \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.354258 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl5bp\" (UniqueName: \"kubernetes.io/projected/9c9439c7-043e-45a5-9bb1-c7f754c3186d-kube-api-access-jl5bp\") pod \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\" (UID: \"9c9439c7-043e-45a5-9bb1-c7f754c3186d\") " Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.357328 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c9439c7-043e-45a5-9bb1-c7f754c3186d-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c9439c7-043e-45a5-9bb1-c7f754c3186d" (UID: "9c9439c7-043e-45a5-9bb1-c7f754c3186d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.360895 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9439c7-043e-45a5-9bb1-c7f754c3186d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c9439c7-043e-45a5-9bb1-c7f754c3186d" (UID: "9c9439c7-043e-45a5-9bb1-c7f754c3186d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.361791 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9439c7-043e-45a5-9bb1-c7f754c3186d-kube-api-access-jl5bp" (OuterVolumeSpecName: "kube-api-access-jl5bp") pod "9c9439c7-043e-45a5-9bb1-c7f754c3186d" (UID: "9c9439c7-043e-45a5-9bb1-c7f754c3186d"). InnerVolumeSpecName "kube-api-access-jl5bp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.457330 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl5bp\" (UniqueName: \"kubernetes.io/projected/9c9439c7-043e-45a5-9bb1-c7f754c3186d-kube-api-access-jl5bp\") on node \"crc\" DevicePath \"\"" Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.457387 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c9439c7-043e-45a5-9bb1-c7f754c3186d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.457400 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c9439c7-043e-45a5-9bb1-c7f754c3186d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.911940 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" event={"ID":"9c9439c7-043e-45a5-9bb1-c7f754c3186d","Type":"ContainerDied","Data":"fa2b087e06adef1e4f284299952afeaa81f972bade988ea40ad1495dea998d7f"} Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.912001 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa2b087e06adef1e4f284299952afeaa81f972bade988ea40ad1495dea998d7f" Jan 22 18:15:03 crc kubenswrapper[4758]: I0122 18:15:03.912077 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485095-5rcdq" Jan 22 18:15:04 crc kubenswrapper[4758]: I0122 18:15:04.403141 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq"] Jan 22 18:15:04 crc kubenswrapper[4758]: I0122 18:15:04.414667 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485050-t4brq"] Jan 22 18:15:04 crc kubenswrapper[4758]: I0122 18:15:04.825211 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1366d10c-135c-489b-920a-3aef5896bbb6" path="/var/lib/kubelet/pods/1366d10c-135c-489b-920a-3aef5896bbb6/volumes" Jan 22 18:16:04 crc kubenswrapper[4758]: I0122 18:16:04.148889 4758 scope.go:117] "RemoveContainer" containerID="78d02e6c4f17eb6bf6983110aa8404fd5f5a2595921a8bfebef37f381564adc8" Jan 22 18:17:13 crc kubenswrapper[4758]: I0122 18:17:13.837584 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:17:13 crc kubenswrapper[4758]: I0122 18:17:13.838922 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:17:43 crc kubenswrapper[4758]: I0122 18:17:43.837023 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 22 18:17:43 crc kubenswrapper[4758]: I0122 18:17:43.838993 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:18:13 crc kubenswrapper[4758]: I0122 18:18:13.837260 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:18:13 crc kubenswrapper[4758]: I0122 18:18:13.837925 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:18:13 crc kubenswrapper[4758]: I0122 18:18:13.838004 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 18:18:13 crc kubenswrapper[4758]: I0122 18:18:13.838772 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1dd7b74fe085345e4dd7f1349a4ed2a791f1166d5bbada9f9c2c704283055387"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 18:18:13 crc kubenswrapper[4758]: I0122 18:18:13.838840 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://1dd7b74fe085345e4dd7f1349a4ed2a791f1166d5bbada9f9c2c704283055387" gracePeriod=600 Jan 22 18:18:14 crc kubenswrapper[4758]: I0122 18:18:14.087733 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="1dd7b74fe085345e4dd7f1349a4ed2a791f1166d5bbada9f9c2c704283055387" exitCode=0 Jan 22 18:18:14 crc kubenswrapper[4758]: I0122 18:18:14.087798 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"1dd7b74fe085345e4dd7f1349a4ed2a791f1166d5bbada9f9c2c704283055387"} Jan 22 18:18:14 crc kubenswrapper[4758]: I0122 18:18:14.088184 4758 scope.go:117] "RemoveContainer" containerID="8b5490c4b3e8158c20032f7b8e64df047dabd62fdeacf2f33c9dc2a8709aa51e" Jan 22 18:18:15 crc kubenswrapper[4758]: I0122 18:18:15.101272 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d"} Jan 22 18:20:43 crc kubenswrapper[4758]: I0122 18:20:43.838080 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:20:43 crc kubenswrapper[4758]: I0122 18:20:43.838669 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.266449 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bqb69"] Jan 22 18:20:52 crc kubenswrapper[4758]: E0122 18:20:52.267664 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9439c7-043e-45a5-9bb1-c7f754c3186d" containerName="collect-profiles" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.267683 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9439c7-043e-45a5-9bb1-c7f754c3186d" containerName="collect-profiles" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.267997 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9439c7-043e-45a5-9bb1-c7f754c3186d" containerName="collect-profiles" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.270278 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.281139 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bqb69"] Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.313427 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdg48\" (UniqueName: \"kubernetes.io/projected/cbfbb6fe-ee18-4035-9991-c7d98984760a-kube-api-access-cdg48\") pod \"certified-operators-bqb69\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.313498 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-catalog-content\") pod \"certified-operators-bqb69\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.313790 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-utilities\") pod \"certified-operators-bqb69\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.415487 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-utilities\") pod \"certified-operators-bqb69\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.415630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdg48\" (UniqueName: \"kubernetes.io/projected/cbfbb6fe-ee18-4035-9991-c7d98984760a-kube-api-access-cdg48\") pod 
\"certified-operators-bqb69\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.415659 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-catalog-content\") pod \"certified-operators-bqb69\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.416129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-utilities\") pod \"certified-operators-bqb69\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.416178 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-catalog-content\") pod \"certified-operators-bqb69\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.439629 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdg48\" (UniqueName: \"kubernetes.io/projected/cbfbb6fe-ee18-4035-9991-c7d98984760a-kube-api-access-cdg48\") pod \"certified-operators-bqb69\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:52 crc kubenswrapper[4758]: I0122 18:20:52.591871 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:20:53 crc kubenswrapper[4758]: I0122 18:20:53.171792 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bqb69"] Jan 22 18:20:53 crc kubenswrapper[4758]: I0122 18:20:53.825939 4758 generic.go:334] "Generic (PLEG): container finished" podID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerID="a97db24759d0118d7a7a862005fa6a36dd7b64a3dafe81c252a8e0eb661b4a2a" exitCode=0 Jan 22 18:20:53 crc kubenswrapper[4758]: I0122 18:20:53.825992 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqb69" event={"ID":"cbfbb6fe-ee18-4035-9991-c7d98984760a","Type":"ContainerDied","Data":"a97db24759d0118d7a7a862005fa6a36dd7b64a3dafe81c252a8e0eb661b4a2a"} Jan 22 18:20:53 crc kubenswrapper[4758]: I0122 18:20:53.826468 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqb69" event={"ID":"cbfbb6fe-ee18-4035-9991-c7d98984760a","Type":"ContainerStarted","Data":"cfa5cd71a8a0217dc64b0bba0b88f2073afb7025d5e45be75ea25e8b9ee3690b"} Jan 22 18:20:53 crc kubenswrapper[4758]: I0122 18:20:53.828667 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 18:20:55 crc kubenswrapper[4758]: I0122 18:20:55.847582 4758 generic.go:334] "Generic (PLEG): container finished" podID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerID="a32595cebd962646928a173c7f6d1e511d1183fdd40e884d3e9ed5a1d24b1e05" exitCode=0 Jan 22 18:20:55 crc kubenswrapper[4758]: I0122 18:20:55.847663 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqb69" event={"ID":"cbfbb6fe-ee18-4035-9991-c7d98984760a","Type":"ContainerDied","Data":"a32595cebd962646928a173c7f6d1e511d1183fdd40e884d3e9ed5a1d24b1e05"} Jan 22 18:20:56 crc kubenswrapper[4758]: I0122 18:20:56.862630 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqb69" event={"ID":"cbfbb6fe-ee18-4035-9991-c7d98984760a","Type":"ContainerStarted","Data":"7053239dd89d9ca7c5468b495727a1c2d4eeaf9dfdd9fa70085b6a58ecd2e328"} Jan 22 18:20:56 crc kubenswrapper[4758]: I0122 18:20:56.885639 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bqb69" podStartSLOduration=2.176855928 podStartE2EDuration="4.885583494s" podCreationTimestamp="2026-01-22 18:20:52 +0000 UTC" firstStartedPulling="2026-01-22 18:20:53.828311986 +0000 UTC m=+6675.311651281" lastFinishedPulling="2026-01-22 18:20:56.537039562 +0000 UTC m=+6678.020378847" observedRunningTime="2026-01-22 18:20:56.880981079 +0000 UTC m=+6678.364320384" watchObservedRunningTime="2026-01-22 18:20:56.885583494 +0000 UTC m=+6678.368922779" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.242661 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-649jt"] Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.247939 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.255942 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-649jt"] Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.342472 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-catalog-content\") pod \"redhat-operators-649jt\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.342545 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5r78\" (UniqueName: \"kubernetes.io/projected/79006ad0-17c0-49fa-b521-035fc1420c3a-kube-api-access-g5r78\") pod \"redhat-operators-649jt\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.342683 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-utilities\") pod \"redhat-operators-649jt\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.445097 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-catalog-content\") pod \"redhat-operators-649jt\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.445198 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5r78\" (UniqueName: \"kubernetes.io/projected/79006ad0-17c0-49fa-b521-035fc1420c3a-kube-api-access-g5r78\") pod \"redhat-operators-649jt\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.445250 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-utilities\") pod \"redhat-operators-649jt\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.445689 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-utilities\") pod \"redhat-operators-649jt\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.445980 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-catalog-content\") pod \"redhat-operators-649jt\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.469667 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g5r78\" (UniqueName: \"kubernetes.io/projected/79006ad0-17c0-49fa-b521-035fc1420c3a-kube-api-access-g5r78\") pod \"redhat-operators-649jt\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.583055 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.592231 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.592432 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.661847 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:21:02 crc kubenswrapper[4758]: I0122 18:21:02.981491 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:21:03 crc kubenswrapper[4758]: I0122 18:21:03.095288 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-649jt"] Jan 22 18:21:03 crc kubenswrapper[4758]: I0122 18:21:03.942732 4758 generic.go:334] "Generic (PLEG): container finished" podID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerID="b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914" exitCode=0 Jan 22 18:21:03 crc kubenswrapper[4758]: I0122 18:21:03.942961 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-649jt" event={"ID":"79006ad0-17c0-49fa-b521-035fc1420c3a","Type":"ContainerDied","Data":"b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914"} Jan 22 18:21:03 crc kubenswrapper[4758]: I0122 18:21:03.943321 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-649jt" event={"ID":"79006ad0-17c0-49fa-b521-035fc1420c3a","Type":"ContainerStarted","Data":"fcbd0139bf92a0a29869d40b2584a357a97471cf38254f38c77aabc21a43638e"} Jan 22 18:21:04 crc kubenswrapper[4758]: I0122 18:21:04.959019 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-649jt" event={"ID":"79006ad0-17c0-49fa-b521-035fc1420c3a","Type":"ContainerStarted","Data":"4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390"} Jan 22 18:21:05 crc kubenswrapper[4758]: I0122 18:21:05.014837 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bqb69"] Jan 22 18:21:05 crc kubenswrapper[4758]: I0122 18:21:05.969520 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bqb69" podUID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerName="registry-server" containerID="cri-o://7053239dd89d9ca7c5468b495727a1c2d4eeaf9dfdd9fa70085b6a58ecd2e328" gracePeriod=2 Jan 22 18:21:06 crc kubenswrapper[4758]: I0122 18:21:06.980387 4758 generic.go:334] "Generic (PLEG): container finished" podID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerID="7053239dd89d9ca7c5468b495727a1c2d4eeaf9dfdd9fa70085b6a58ecd2e328" exitCode=0 Jan 22 18:21:06 crc kubenswrapper[4758]: I0122 18:21:06.980576 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bqb69" event={"ID":"cbfbb6fe-ee18-4035-9991-c7d98984760a","Type":"ContainerDied","Data":"7053239dd89d9ca7c5468b495727a1c2d4eeaf9dfdd9fa70085b6a58ecd2e328"} Jan 22 18:21:06 crc kubenswrapper[4758]: I0122 18:21:06.980771 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bqb69" event={"ID":"cbfbb6fe-ee18-4035-9991-c7d98984760a","Type":"ContainerDied","Data":"cfa5cd71a8a0217dc64b0bba0b88f2073afb7025d5e45be75ea25e8b9ee3690b"} Jan 22 18:21:06 crc kubenswrapper[4758]: I0122 18:21:06.980799 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfa5cd71a8a0217dc64b0bba0b88f2073afb7025d5e45be75ea25e8b9ee3690b" Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.040459 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.167528 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-catalog-content\") pod \"cbfbb6fe-ee18-4035-9991-c7d98984760a\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.167681 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-utilities\") pod \"cbfbb6fe-ee18-4035-9991-c7d98984760a\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.167823 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdg48\" (UniqueName: \"kubernetes.io/projected/cbfbb6fe-ee18-4035-9991-c7d98984760a-kube-api-access-cdg48\") pod \"cbfbb6fe-ee18-4035-9991-c7d98984760a\" (UID: \"cbfbb6fe-ee18-4035-9991-c7d98984760a\") " Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.169500 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-utilities" (OuterVolumeSpecName: "utilities") pod "cbfbb6fe-ee18-4035-9991-c7d98984760a" (UID: "cbfbb6fe-ee18-4035-9991-c7d98984760a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.174268 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbfbb6fe-ee18-4035-9991-c7d98984760a-kube-api-access-cdg48" (OuterVolumeSpecName: "kube-api-access-cdg48") pod "cbfbb6fe-ee18-4035-9991-c7d98984760a" (UID: "cbfbb6fe-ee18-4035-9991-c7d98984760a"). InnerVolumeSpecName "kube-api-access-cdg48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.217093 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbfbb6fe-ee18-4035-9991-c7d98984760a" (UID: "cbfbb6fe-ee18-4035-9991-c7d98984760a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.270315 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.270362 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbfbb6fe-ee18-4035-9991-c7d98984760a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.270377 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdg48\" (UniqueName: \"kubernetes.io/projected/cbfbb6fe-ee18-4035-9991-c7d98984760a-kube-api-access-cdg48\") on node \"crc\" DevicePath \"\"" Jan 22 18:21:07 crc kubenswrapper[4758]: I0122 18:21:07.989045 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bqb69" Jan 22 18:21:08 crc kubenswrapper[4758]: I0122 18:21:08.024390 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bqb69"] Jan 22 18:21:08 crc kubenswrapper[4758]: I0122 18:21:08.034991 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bqb69"] Jan 22 18:21:08 crc kubenswrapper[4758]: I0122 18:21:08.830571 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbfbb6fe-ee18-4035-9991-c7d98984760a" path="/var/lib/kubelet/pods/cbfbb6fe-ee18-4035-9991-c7d98984760a/volumes" Jan 22 18:21:09 crc kubenswrapper[4758]: I0122 18:21:09.000841 4758 generic.go:334] "Generic (PLEG): container finished" podID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerID="4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390" exitCode=0 Jan 22 18:21:09 crc kubenswrapper[4758]: I0122 18:21:09.000924 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-649jt" event={"ID":"79006ad0-17c0-49fa-b521-035fc1420c3a","Type":"ContainerDied","Data":"4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390"} Jan 22 18:21:10 crc kubenswrapper[4758]: I0122 18:21:10.018868 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-649jt" event={"ID":"79006ad0-17c0-49fa-b521-035fc1420c3a","Type":"ContainerStarted","Data":"dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27"} Jan 22 18:21:10 crc kubenswrapper[4758]: I0122 18:21:10.048460 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-649jt" podStartSLOduration=2.571963147 podStartE2EDuration="8.048434588s" podCreationTimestamp="2026-01-22 18:21:02 +0000 UTC" firstStartedPulling="2026-01-22 18:21:03.945445166 +0000 UTC m=+6685.428784451" lastFinishedPulling="2026-01-22 18:21:09.421916607 +0000 UTC m=+6690.905255892" observedRunningTime="2026-01-22 18:21:10.036099983 +0000 UTC m=+6691.519439268" watchObservedRunningTime="2026-01-22 18:21:10.048434588 +0000 UTC m=+6691.531773873" Jan 22 18:21:12 crc kubenswrapper[4758]: I0122 18:21:12.584298 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:12 crc kubenswrapper[4758]: I0122 18:21:12.584699 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:13 crc kubenswrapper[4758]: I0122 18:21:13.634989 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-649jt" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerName="registry-server" probeResult="failure" output=< Jan 22 18:21:13 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 18:21:13 crc kubenswrapper[4758]: > Jan 22 18:21:13 crc kubenswrapper[4758]: I0122 18:21:13.837103 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:21:13 crc kubenswrapper[4758]: I0122 18:21:13.837160 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:21:22 crc kubenswrapper[4758]: I0122 18:21:22.636673 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:22 crc kubenswrapper[4758]: I0122 18:21:22.702373 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:23 crc kubenswrapper[4758]: I0122 18:21:23.483321 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-649jt"] Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.153160 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-649jt" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerName="registry-server" containerID="cri-o://dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27" gracePeriod=2 Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.725913 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.772032 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5r78\" (UniqueName: \"kubernetes.io/projected/79006ad0-17c0-49fa-b521-035fc1420c3a-kube-api-access-g5r78\") pod \"79006ad0-17c0-49fa-b521-035fc1420c3a\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.773339 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-catalog-content\") pod \"79006ad0-17c0-49fa-b521-035fc1420c3a\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.773493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-utilities\") pod \"79006ad0-17c0-49fa-b521-035fc1420c3a\" (UID: \"79006ad0-17c0-49fa-b521-035fc1420c3a\") " Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.774922 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-utilities" (OuterVolumeSpecName: "utilities") pod "79006ad0-17c0-49fa-b521-035fc1420c3a" (UID: "79006ad0-17c0-49fa-b521-035fc1420c3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.781882 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79006ad0-17c0-49fa-b521-035fc1420c3a-kube-api-access-g5r78" (OuterVolumeSpecName: "kube-api-access-g5r78") pod "79006ad0-17c0-49fa-b521-035fc1420c3a" (UID: "79006ad0-17c0-49fa-b521-035fc1420c3a"). InnerVolumeSpecName "kube-api-access-g5r78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.878041 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.878090 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5r78\" (UniqueName: \"kubernetes.io/projected/79006ad0-17c0-49fa-b521-035fc1420c3a-kube-api-access-g5r78\") on node \"crc\" DevicePath \"\"" Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.912950 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79006ad0-17c0-49fa-b521-035fc1420c3a" (UID: "79006ad0-17c0-49fa-b521-035fc1420c3a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:21:24 crc kubenswrapper[4758]: I0122 18:21:24.979628 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79006ad0-17c0-49fa-b521-035fc1420c3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.172176 4758 generic.go:334] "Generic (PLEG): container finished" podID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerID="dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27" exitCode=0 Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.172245 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-649jt" event={"ID":"79006ad0-17c0-49fa-b521-035fc1420c3a","Type":"ContainerDied","Data":"dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27"} Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.172290 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-649jt" event={"ID":"79006ad0-17c0-49fa-b521-035fc1420c3a","Type":"ContainerDied","Data":"fcbd0139bf92a0a29869d40b2584a357a97471cf38254f38c77aabc21a43638e"} Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.172318 4758 scope.go:117] "RemoveContainer" containerID="dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27" Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.172523 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-649jt" Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.205117 4758 scope.go:117] "RemoveContainer" containerID="4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390" Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.235835 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-649jt"] Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.242083 4758 scope.go:117] "RemoveContainer" containerID="b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914" Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.253663 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-649jt"] Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.307217 4758 scope.go:117] "RemoveContainer" containerID="dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27" Jan 22 18:21:25 crc kubenswrapper[4758]: E0122 18:21:25.309117 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27\": container with ID starting with dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27 not found: ID does not exist" containerID="dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27" Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.309266 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27"} err="failed to get container status \"dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27\": rpc error: code = NotFound desc = could not find container \"dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27\": container with ID starting with dba4818a1d407b0041486511d8183237ad5506a1d1087f1708ce83a6f0cc0e27 not found: ID does not exist" Jan 22 18:21:25 crc 
kubenswrapper[4758]: I0122 18:21:25.309339 4758 scope.go:117] "RemoveContainer" containerID="4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390" Jan 22 18:21:25 crc kubenswrapper[4758]: E0122 18:21:25.310186 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390\": container with ID starting with 4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390 not found: ID does not exist" containerID="4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390" Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.310274 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390"} err="failed to get container status \"4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390\": rpc error: code = NotFound desc = could not find container \"4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390\": container with ID starting with 4170b6975d0c6e4e43cf3358352d418bc82bf62e2e1a18f0808d1c2824cf6390 not found: ID does not exist" Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.310345 4758 scope.go:117] "RemoveContainer" containerID="b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914" Jan 22 18:21:25 crc kubenswrapper[4758]: E0122 18:21:25.310883 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914\": container with ID starting with b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914 not found: ID does not exist" containerID="b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914" Jan 22 18:21:25 crc kubenswrapper[4758]: I0122 18:21:25.310924 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914"} err="failed to get container status \"b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914\": rpc error: code = NotFound desc = could not find container \"b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914\": container with ID starting with b0df61cf308b926628471d44cadba77c15d8474bd658ec5bb0321a5b7cb09914 not found: ID does not exist" Jan 22 18:21:26 crc kubenswrapper[4758]: I0122 18:21:26.855252 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" path="/var/lib/kubelet/pods/79006ad0-17c0-49fa-b521-035fc1420c3a/volumes" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.059817 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dvhvq"] Jan 22 18:21:36 crc kubenswrapper[4758]: E0122 18:21:36.060758 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerName="registry-server" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.060771 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerName="registry-server" Jan 22 18:21:36 crc kubenswrapper[4758]: E0122 18:21:36.060783 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerName="extract-content" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.060789 4758 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerName="extract-content" Jan 22 18:21:36 crc kubenswrapper[4758]: E0122 18:21:36.060802 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerName="extract-content" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.060809 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerName="extract-content" Jan 22 18:21:36 crc kubenswrapper[4758]: E0122 18:21:36.060823 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerName="registry-server" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.060829 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerName="registry-server" Jan 22 18:21:36 crc kubenswrapper[4758]: E0122 18:21:36.060851 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerName="extract-utilities" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.060857 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerName="extract-utilities" Jan 22 18:21:36 crc kubenswrapper[4758]: E0122 18:21:36.060871 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerName="extract-utilities" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.060877 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerName="extract-utilities" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.061050 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="79006ad0-17c0-49fa-b521-035fc1420c3a" containerName="registry-server" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.061067 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbfbb6fe-ee18-4035-9991-c7d98984760a" containerName="registry-server" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.062483 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.075473 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvhvq"] Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.126830 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-catalog-content\") pod \"community-operators-dvhvq\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.126887 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbl8\" (UniqueName: \"kubernetes.io/projected/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-kube-api-access-ntbl8\") pod \"community-operators-dvhvq\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.127035 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-utilities\") pod \"community-operators-dvhvq\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.228898 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntbl8\" (UniqueName: \"kubernetes.io/projected/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-kube-api-access-ntbl8\") pod \"community-operators-dvhvq\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.229016 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-utilities\") pod \"community-operators-dvhvq\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.229159 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-catalog-content\") pod \"community-operators-dvhvq\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.229496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-utilities\") pod \"community-operators-dvhvq\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.229540 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-catalog-content\") pod \"community-operators-dvhvq\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.256968 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ntbl8\" (UniqueName: \"kubernetes.io/projected/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-kube-api-access-ntbl8\") pod \"community-operators-dvhvq\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:36 crc kubenswrapper[4758]: I0122 18:21:36.432581 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:37 crc kubenswrapper[4758]: I0122 18:21:37.020245 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvhvq"] Jan 22 18:21:37 crc kubenswrapper[4758]: I0122 18:21:37.305977 4758 generic.go:334] "Generic (PLEG): container finished" podID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerID="8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834" exitCode=0 Jan 22 18:21:37 crc kubenswrapper[4758]: I0122 18:21:37.306023 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvhvq" event={"ID":"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c","Type":"ContainerDied","Data":"8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834"} Jan 22 18:21:37 crc kubenswrapper[4758]: I0122 18:21:37.306055 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvhvq" event={"ID":"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c","Type":"ContainerStarted","Data":"8df3c84da63115e7e926ba5e0558571e62d3b0d8c92645111ed17893cb813a00"} Jan 22 18:21:38 crc kubenswrapper[4758]: I0122 18:21:38.316952 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvhvq" event={"ID":"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c","Type":"ContainerStarted","Data":"900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e"} Jan 22 18:21:39 crc kubenswrapper[4758]: I0122 18:21:39.333339 4758 generic.go:334] "Generic (PLEG): container finished" podID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerID="900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e" exitCode=0 Jan 22 18:21:39 crc kubenswrapper[4758]: I0122 18:21:39.333697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvhvq" event={"ID":"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c","Type":"ContainerDied","Data":"900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e"} Jan 22 18:21:40 crc kubenswrapper[4758]: I0122 18:21:40.348189 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvhvq" event={"ID":"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c","Type":"ContainerStarted","Data":"42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f"} Jan 22 18:21:40 crc kubenswrapper[4758]: I0122 18:21:40.367947 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dvhvq" podStartSLOduration=1.904409163 podStartE2EDuration="4.367923472s" podCreationTimestamp="2026-01-22 18:21:36 +0000 UTC" firstStartedPulling="2026-01-22 18:21:37.307560459 +0000 UTC m=+6718.790899744" lastFinishedPulling="2026-01-22 18:21:39.771074768 +0000 UTC m=+6721.254414053" observedRunningTime="2026-01-22 18:21:40.36714752 +0000 UTC m=+6721.850486815" watchObservedRunningTime="2026-01-22 18:21:40.367923472 +0000 UTC m=+6721.851262757" Jan 22 18:21:43 crc kubenswrapper[4758]: I0122 18:21:43.837320 4758 patch_prober.go:28] interesting 
pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:21:43 crc kubenswrapper[4758]: I0122 18:21:43.837665 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:21:43 crc kubenswrapper[4758]: I0122 18:21:43.837707 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 18:21:43 crc kubenswrapper[4758]: I0122 18:21:43.838545 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 18:21:43 crc kubenswrapper[4758]: I0122 18:21:43.838602 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" gracePeriod=600 Jan 22 18:21:44 crc kubenswrapper[4758]: E0122 18:21:44.471717 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:21:45 crc kubenswrapper[4758]: I0122 18:21:45.405580 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" exitCode=0 Jan 22 18:21:45 crc kubenswrapper[4758]: I0122 18:21:45.405630 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d"} Jan 22 18:21:45 crc kubenswrapper[4758]: I0122 18:21:45.405665 4758 scope.go:117] "RemoveContainer" containerID="1dd7b74fe085345e4dd7f1349a4ed2a791f1166d5bbada9f9c2c704283055387" Jan 22 18:21:45 crc kubenswrapper[4758]: I0122 18:21:45.406589 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:21:45 crc kubenswrapper[4758]: E0122 18:21:45.407117 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:21:46 crc kubenswrapper[4758]: I0122 18:21:46.432896 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:46 crc kubenswrapper[4758]: I0122 18:21:46.433457 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:46 crc kubenswrapper[4758]: I0122 18:21:46.484799 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:47 crc kubenswrapper[4758]: I0122 18:21:47.494036 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:47 crc kubenswrapper[4758]: I0122 18:21:47.567492 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvhvq"] Jan 22 18:21:49 crc kubenswrapper[4758]: I0122 18:21:49.451370 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dvhvq" podUID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerName="registry-server" containerID="cri-o://42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f" gracePeriod=2 Jan 22 18:21:49 crc kubenswrapper[4758]: I0122 18:21:49.926670 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.101295 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-catalog-content\") pod \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.101775 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-utilities\") pod \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.101898 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntbl8\" (UniqueName: \"kubernetes.io/projected/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-kube-api-access-ntbl8\") pod \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\" (UID: \"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c\") " Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.102880 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-utilities" (OuterVolumeSpecName: "utilities") pod "49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" (UID: "49cf8364-ea4f-4e5e-a6bf-93406d5bb76c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.109980 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-kube-api-access-ntbl8" (OuterVolumeSpecName: "kube-api-access-ntbl8") pod "49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" (UID: "49cf8364-ea4f-4e5e-a6bf-93406d5bb76c"). InnerVolumeSpecName "kube-api-access-ntbl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.154019 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" (UID: "49cf8364-ea4f-4e5e-a6bf-93406d5bb76c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.204368 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntbl8\" (UniqueName: \"kubernetes.io/projected/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-kube-api-access-ntbl8\") on node \"crc\" DevicePath \"\"" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.204403 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.204413 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.465547 4758 generic.go:334] "Generic (PLEG): container finished" podID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerID="42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f" exitCode=0 Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.465597 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvhvq" event={"ID":"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c","Type":"ContainerDied","Data":"42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f"} Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.465630 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvhvq" event={"ID":"49cf8364-ea4f-4e5e-a6bf-93406d5bb76c","Type":"ContainerDied","Data":"8df3c84da63115e7e926ba5e0558571e62d3b0d8c92645111ed17893cb813a00"} Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.465652 4758 scope.go:117] "RemoveContainer" containerID="42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.465689 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dvhvq" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.498703 4758 scope.go:117] "RemoveContainer" containerID="900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.527274 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvhvq"] Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.534498 4758 scope.go:117] "RemoveContainer" containerID="8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.543488 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dvhvq"] Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.586382 4758 scope.go:117] "RemoveContainer" containerID="42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f" Jan 22 18:21:50 crc kubenswrapper[4758]: E0122 18:21:50.586997 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f\": container with ID starting with 42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f not found: ID does not exist" containerID="42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.587067 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f"} err="failed to get container status \"42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f\": rpc error: code = NotFound desc = could not find container \"42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f\": container with ID starting with 42c361e350b8f542eb82987af41feebe3df0b70e0903c6b0a6a501aae2bf149f not found: ID does not exist" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.587163 4758 scope.go:117] "RemoveContainer" containerID="900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e" Jan 22 18:21:50 crc kubenswrapper[4758]: E0122 18:21:50.587510 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e\": container with ID starting with 900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e not found: ID does not exist" containerID="900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.587564 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e"} err="failed to get container status \"900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e\": rpc error: code = NotFound desc = could not find container \"900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e\": container with ID starting with 900c8a2fe936e51037550f78234123073641d4c2639cdaccfbfa2c02ce50cb4e not found: ID does not exist" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.587605 4758 scope.go:117] "RemoveContainer" containerID="8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834" Jan 22 18:21:50 crc kubenswrapper[4758]: E0122 18:21:50.588151 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834\": container with ID starting with 8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834 not found: ID does not exist" containerID="8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.588196 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834"} err="failed to get container status \"8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834\": rpc error: code = NotFound desc = could not find container \"8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834\": container with ID starting with 8feabc82157fa5ccde2c32e7336efb38cbc69ad3d80e8787d74012e20d794834 not found: ID does not exist" Jan 22 18:21:50 crc kubenswrapper[4758]: I0122 18:21:50.824650 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" path="/var/lib/kubelet/pods/49cf8364-ea4f-4e5e-a6bf-93406d5bb76c/volumes" Jan 22 18:21:57 crc kubenswrapper[4758]: I0122 18:21:57.808601 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:21:57 crc kubenswrapper[4758]: E0122 18:21:57.811307 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:22:11 crc kubenswrapper[4758]: I0122 18:22:11.809644 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:22:11 crc kubenswrapper[4758]: E0122 18:22:11.810867 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:22:22 crc kubenswrapper[4758]: I0122 18:22:22.808245 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:22:22 crc kubenswrapper[4758]: E0122 18:22:22.809082 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:22:33 crc kubenswrapper[4758]: I0122 18:22:33.808515 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:22:33 crc kubenswrapper[4758]: E0122 18:22:33.809382 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:22:45 crc kubenswrapper[4758]: I0122 18:22:45.809578 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:22:45 crc kubenswrapper[4758]: E0122 18:22:45.810323 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:22:56 crc kubenswrapper[4758]: I0122 18:22:56.809056 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:22:56 crc kubenswrapper[4758]: E0122 18:22:56.810091 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:23:08 crc kubenswrapper[4758]: I0122 18:23:08.815821 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:23:08 crc kubenswrapper[4758]: E0122 18:23:08.818318 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:23:21 crc kubenswrapper[4758]: I0122 18:23:21.808041 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:23:21 crc kubenswrapper[4758]: E0122 18:23:21.808844 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:23:34 crc kubenswrapper[4758]: I0122 18:23:34.808364 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:23:34 crc kubenswrapper[4758]: E0122 18:23:34.809245 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:23:48 crc kubenswrapper[4758]: I0122 18:23:48.817499 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:23:48 crc kubenswrapper[4758]: E0122 18:23:48.818650 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:23:59 crc kubenswrapper[4758]: I0122 18:23:59.808399 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:23:59 crc kubenswrapper[4758]: E0122 18:23:59.810230 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:24:11 crc kubenswrapper[4758]: I0122 18:24:11.808729 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:24:11 crc kubenswrapper[4758]: E0122 18:24:11.809616 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:24:25 crc kubenswrapper[4758]: I0122 18:24:25.808615 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:24:25 crc kubenswrapper[4758]: E0122 18:24:25.809518 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:24:36 crc kubenswrapper[4758]: I0122 18:24:36.811281 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:24:36 crc kubenswrapper[4758]: E0122 18:24:36.812394 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" 
podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:24:50 crc kubenswrapper[4758]: I0122 18:24:50.808756 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:24:50 crc kubenswrapper[4758]: E0122 18:24:50.809445 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:25:01 crc kubenswrapper[4758]: I0122 18:25:01.808647 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:25:01 crc kubenswrapper[4758]: E0122 18:25:01.809454 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.782498 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w9mtz"] Jan 22 18:25:08 crc kubenswrapper[4758]: E0122 18:25:08.783528 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerName="registry-server" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.783546 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerName="registry-server" Jan 22 18:25:08 crc kubenswrapper[4758]: E0122 18:25:08.783583 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerName="extract-content" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.783589 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerName="extract-content" Jan 22 18:25:08 crc kubenswrapper[4758]: E0122 18:25:08.783601 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerName="extract-utilities" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.783607 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerName="extract-utilities" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.783858 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="49cf8364-ea4f-4e5e-a6bf-93406d5bb76c" containerName="registry-server" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.785315 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.798360 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9mtz"] Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.877406 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wzx\" (UniqueName: \"kubernetes.io/projected/1777e281-f5de-47a1-816a-638a5ae761b2-kube-api-access-b8wzx\") pod \"redhat-marketplace-w9mtz\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.877860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-utilities\") pod \"redhat-marketplace-w9mtz\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.878221 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-catalog-content\") pod \"redhat-marketplace-w9mtz\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.980464 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-utilities\") pod \"redhat-marketplace-w9mtz\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.980590 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-catalog-content\") pod \"redhat-marketplace-w9mtz\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.980676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8wzx\" (UniqueName: \"kubernetes.io/projected/1777e281-f5de-47a1-816a-638a5ae761b2-kube-api-access-b8wzx\") pod \"redhat-marketplace-w9mtz\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.981459 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-utilities\") pod \"redhat-marketplace-w9mtz\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:08 crc kubenswrapper[4758]: I0122 18:25:08.981716 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-catalog-content\") pod \"redhat-marketplace-w9mtz\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:09 crc kubenswrapper[4758]: I0122 18:25:09.013008 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b8wzx\" (UniqueName: \"kubernetes.io/projected/1777e281-f5de-47a1-816a-638a5ae761b2-kube-api-access-b8wzx\") pod \"redhat-marketplace-w9mtz\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:09 crc kubenswrapper[4758]: I0122 18:25:09.109453 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:09 crc kubenswrapper[4758]: I0122 18:25:09.633553 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9mtz"] Jan 22 18:25:10 crc kubenswrapper[4758]: I0122 18:25:10.203249 4758 generic.go:334] "Generic (PLEG): container finished" podID="1777e281-f5de-47a1-816a-638a5ae761b2" containerID="968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945" exitCode=0 Jan 22 18:25:10 crc kubenswrapper[4758]: I0122 18:25:10.203361 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9mtz" event={"ID":"1777e281-f5de-47a1-816a-638a5ae761b2","Type":"ContainerDied","Data":"968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945"} Jan 22 18:25:10 crc kubenswrapper[4758]: I0122 18:25:10.203553 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9mtz" event={"ID":"1777e281-f5de-47a1-816a-638a5ae761b2","Type":"ContainerStarted","Data":"33504578f2b4c38740a8bfbfc31b6b324c777615a46ae8223d701e46bf829497"} Jan 22 18:25:11 crc kubenswrapper[4758]: I0122 18:25:11.214927 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9mtz" event={"ID":"1777e281-f5de-47a1-816a-638a5ae761b2","Type":"ContainerStarted","Data":"11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a"} Jan 22 18:25:12 crc kubenswrapper[4758]: I0122 18:25:12.235676 4758 generic.go:334] "Generic (PLEG): container finished" podID="1777e281-f5de-47a1-816a-638a5ae761b2" containerID="11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a" exitCode=0 Jan 22 18:25:12 crc kubenswrapper[4758]: I0122 18:25:12.235779 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9mtz" event={"ID":"1777e281-f5de-47a1-816a-638a5ae761b2","Type":"ContainerDied","Data":"11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a"} Jan 22 18:25:13 crc kubenswrapper[4758]: I0122 18:25:13.247917 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9mtz" event={"ID":"1777e281-f5de-47a1-816a-638a5ae761b2","Type":"ContainerStarted","Data":"c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c"} Jan 22 18:25:13 crc kubenswrapper[4758]: I0122 18:25:13.276524 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w9mtz" podStartSLOduration=2.8479317269999997 podStartE2EDuration="5.276469753s" podCreationTimestamp="2026-01-22 18:25:08 +0000 UTC" firstStartedPulling="2026-01-22 18:25:10.204913375 +0000 UTC m=+6931.688252650" lastFinishedPulling="2026-01-22 18:25:12.633451391 +0000 UTC m=+6934.116790676" observedRunningTime="2026-01-22 18:25:13.266919473 +0000 UTC m=+6934.750258758" watchObservedRunningTime="2026-01-22 18:25:13.276469753 +0000 UTC m=+6934.759809038" Jan 22 18:25:16 crc kubenswrapper[4758]: I0122 18:25:16.808674 4758 scope.go:117] "RemoveContainer" 
containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:25:16 crc kubenswrapper[4758]: E0122 18:25:16.810579 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:25:19 crc kubenswrapper[4758]: I0122 18:25:19.110213 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:19 crc kubenswrapper[4758]: I0122 18:25:19.110563 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:19 crc kubenswrapper[4758]: I0122 18:25:19.163458 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:19 crc kubenswrapper[4758]: I0122 18:25:19.348075 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:19 crc kubenswrapper[4758]: I0122 18:25:19.403290 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9mtz"] Jan 22 18:25:21 crc kubenswrapper[4758]: I0122 18:25:21.329133 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w9mtz" podUID="1777e281-f5de-47a1-816a-638a5ae761b2" containerName="registry-server" containerID="cri-o://c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c" gracePeriod=2 Jan 22 18:25:21 crc kubenswrapper[4758]: I0122 18:25:21.906311 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:21 crc kubenswrapper[4758]: I0122 18:25:21.975025 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-catalog-content\") pod \"1777e281-f5de-47a1-816a-638a5ae761b2\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " Jan 22 18:25:21 crc kubenswrapper[4758]: I0122 18:25:21.975213 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-utilities\") pod \"1777e281-f5de-47a1-816a-638a5ae761b2\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " Jan 22 18:25:21 crc kubenswrapper[4758]: I0122 18:25:21.975275 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8wzx\" (UniqueName: \"kubernetes.io/projected/1777e281-f5de-47a1-816a-638a5ae761b2-kube-api-access-b8wzx\") pod \"1777e281-f5de-47a1-816a-638a5ae761b2\" (UID: \"1777e281-f5de-47a1-816a-638a5ae761b2\") " Jan 22 18:25:21 crc kubenswrapper[4758]: I0122 18:25:21.976473 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-utilities" (OuterVolumeSpecName: "utilities") pod "1777e281-f5de-47a1-816a-638a5ae761b2" (UID: "1777e281-f5de-47a1-816a-638a5ae761b2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:25:21 crc kubenswrapper[4758]: I0122 18:25:21.985327 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1777e281-f5de-47a1-816a-638a5ae761b2-kube-api-access-b8wzx" (OuterVolumeSpecName: "kube-api-access-b8wzx") pod "1777e281-f5de-47a1-816a-638a5ae761b2" (UID: "1777e281-f5de-47a1-816a-638a5ae761b2"). InnerVolumeSpecName "kube-api-access-b8wzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:25:21 crc kubenswrapper[4758]: I0122 18:25:21.997980 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1777e281-f5de-47a1-816a-638a5ae761b2" (UID: "1777e281-f5de-47a1-816a-638a5ae761b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.078294 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.078368 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1777e281-f5de-47a1-816a-638a5ae761b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.078384 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8wzx\" (UniqueName: \"kubernetes.io/projected/1777e281-f5de-47a1-816a-638a5ae761b2-kube-api-access-b8wzx\") on node \"crc\" DevicePath \"\"" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.340616 4758 generic.go:334] "Generic (PLEG): container finished" podID="1777e281-f5de-47a1-816a-638a5ae761b2" containerID="c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c" exitCode=0 Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.340691 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9mtz" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.340690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9mtz" event={"ID":"1777e281-f5de-47a1-816a-638a5ae761b2","Type":"ContainerDied","Data":"c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c"} Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.341153 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9mtz" event={"ID":"1777e281-f5de-47a1-816a-638a5ae761b2","Type":"ContainerDied","Data":"33504578f2b4c38740a8bfbfc31b6b324c777615a46ae8223d701e46bf829497"} Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.341178 4758 scope.go:117] "RemoveContainer" containerID="c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.366799 4758 scope.go:117] "RemoveContainer" containerID="11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.389844 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9mtz"] Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.407711 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9mtz"] Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.417015 4758 scope.go:117] "RemoveContainer" containerID="968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.448954 4758 scope.go:117] "RemoveContainer" containerID="c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c" Jan 22 18:25:22 crc kubenswrapper[4758]: E0122 18:25:22.450377 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c\": container with ID starting with c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c not found: ID does not exist" containerID="c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.450446 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c"} err="failed to get container status \"c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c\": rpc error: code = NotFound desc = could not find container \"c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c\": container with ID starting with c528caffb8a234d5471c3ba2a9d935f080b471818ce5a3c081c48c24b9ca062c not found: ID does not exist" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.450473 4758 scope.go:117] "RemoveContainer" containerID="11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a" Jan 22 18:25:22 crc kubenswrapper[4758]: E0122 18:25:22.451041 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a\": container with ID starting with 11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a not found: ID does not exist" containerID="11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.451107 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a"} err="failed to get container status \"11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a\": rpc error: code = NotFound desc = could not find container \"11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a\": container with ID starting with 11e0733f96a1076eca79c9b2e2d95f5ba9ba1a4543f0704b3e2ded4a5636825a not found: ID does not exist" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.451161 4758 scope.go:117] "RemoveContainer" containerID="968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945" Jan 22 18:25:22 crc kubenswrapper[4758]: E0122 18:25:22.451587 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945\": container with ID starting with 968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945 not found: ID does not exist" containerID="968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.451631 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945"} err="failed to get container status \"968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945\": rpc error: code = NotFound desc = could not find container \"968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945\": container with ID starting with 968e6ec7321f77746107d82ab2e65a92aa0abd737ba704b6cd290f27cd56f945 not found: ID does not exist" Jan 22 18:25:22 crc kubenswrapper[4758]: I0122 18:25:22.831141 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1777e281-f5de-47a1-816a-638a5ae761b2" path="/var/lib/kubelet/pods/1777e281-f5de-47a1-816a-638a5ae761b2/volumes" Jan 22 18:25:29 crc kubenswrapper[4758]: I0122 18:25:29.808227 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:25:29 crc kubenswrapper[4758]: E0122 18:25:29.809205 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:25:41 crc kubenswrapper[4758]: I0122 18:25:41.808770 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:25:41 crc kubenswrapper[4758]: E0122 18:25:41.809650 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:25:53 crc kubenswrapper[4758]: I0122 18:25:53.808894 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:25:53 crc 
kubenswrapper[4758]: E0122 18:25:53.810012 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:26:08 crc kubenswrapper[4758]: I0122 18:26:08.818653 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:26:08 crc kubenswrapper[4758]: E0122 18:26:08.819468 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:26:23 crc kubenswrapper[4758]: I0122 18:26:23.808715 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:26:23 crc kubenswrapper[4758]: E0122 18:26:23.809401 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:26:34 crc kubenswrapper[4758]: I0122 18:26:34.808517 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:26:34 crc kubenswrapper[4758]: E0122 18:26:34.809344 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:26:47 crc kubenswrapper[4758]: I0122 18:26:47.808955 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:26:48 crc kubenswrapper[4758]: I0122 18:26:48.510816 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"2a6a8e642e4ee60ebde8d328db1c83e15009314791b4ff4fb0767d4d7274d9c0"} Jan 22 18:27:04 crc kubenswrapper[4758]: I0122 18:27:04.519398 4758 scope.go:117] "RemoveContainer" containerID="a97db24759d0118d7a7a862005fa6a36dd7b64a3dafe81c252a8e0eb661b4a2a" Jan 22 18:27:04 crc kubenswrapper[4758]: I0122 18:27:04.560849 4758 scope.go:117] "RemoveContainer" containerID="7053239dd89d9ca7c5468b495727a1c2d4eeaf9dfdd9fa70085b6a58ecd2e328" Jan 22 18:27:04 crc kubenswrapper[4758]: I0122 18:27:04.615306 4758 scope.go:117] "RemoveContainer" containerID="a32595cebd962646928a173c7f6d1e511d1183fdd40e884d3e9ed5a1d24b1e05" Jan 22 
18:29:13 crc kubenswrapper[4758]: I0122 18:29:13.837492 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:29:13 crc kubenswrapper[4758]: I0122 18:29:13.838618 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:29:43 crc kubenswrapper[4758]: I0122 18:29:43.837552 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:29:43 crc kubenswrapper[4758]: I0122 18:29:43.838217 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.159046 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx"] Jan 22 18:30:00 crc kubenswrapper[4758]: E0122 18:30:00.160454 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1777e281-f5de-47a1-816a-638a5ae761b2" containerName="registry-server" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.160482 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1777e281-f5de-47a1-816a-638a5ae761b2" containerName="registry-server" Jan 22 18:30:00 crc kubenswrapper[4758]: E0122 18:30:00.160523 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1777e281-f5de-47a1-816a-638a5ae761b2" containerName="extract-utilities" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.160532 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1777e281-f5de-47a1-816a-638a5ae761b2" containerName="extract-utilities" Jan 22 18:30:00 crc kubenswrapper[4758]: E0122 18:30:00.160571 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1777e281-f5de-47a1-816a-638a5ae761b2" containerName="extract-content" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.160578 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1777e281-f5de-47a1-816a-638a5ae761b2" containerName="extract-content" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.160938 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1777e281-f5de-47a1-816a-638a5ae761b2" containerName="registry-server" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.161921 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.165249 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.169342 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.178536 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx"] Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.346633 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzhs\" (UniqueName: \"kubernetes.io/projected/b86d2c55-8454-4584-b532-b013054209c5-kube-api-access-8kzhs\") pod \"collect-profiles-29485110-q9xpx\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.347045 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b86d2c55-8454-4584-b532-b013054209c5-secret-volume\") pod \"collect-profiles-29485110-q9xpx\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.347520 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86d2c55-8454-4584-b532-b013054209c5-config-volume\") pod \"collect-profiles-29485110-q9xpx\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.449824 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b86d2c55-8454-4584-b532-b013054209c5-secret-volume\") pod \"collect-profiles-29485110-q9xpx\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.449962 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86d2c55-8454-4584-b532-b013054209c5-config-volume\") pod \"collect-profiles-29485110-q9xpx\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.450029 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kzhs\" (UniqueName: \"kubernetes.io/projected/b86d2c55-8454-4584-b532-b013054209c5-kube-api-access-8kzhs\") pod \"collect-profiles-29485110-q9xpx\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.451055 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86d2c55-8454-4584-b532-b013054209c5-config-volume\") pod 
\"collect-profiles-29485110-q9xpx\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.456019 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b86d2c55-8454-4584-b532-b013054209c5-secret-volume\") pod \"collect-profiles-29485110-q9xpx\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.467015 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kzhs\" (UniqueName: \"kubernetes.io/projected/b86d2c55-8454-4584-b532-b013054209c5-kube-api-access-8kzhs\") pod \"collect-profiles-29485110-q9xpx\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:00 crc kubenswrapper[4758]: I0122 18:30:00.493686 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:01 crc kubenswrapper[4758]: I0122 18:30:01.015634 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx"] Jan 22 18:30:01 crc kubenswrapper[4758]: W0122 18:30:01.015940 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb86d2c55_8454_4584_b532_b013054209c5.slice/crio-2153e8e34bbb93293760552815eabb539fe3a3841f46a05a46eac34afb34ab2c WatchSource:0}: Error finding container 2153e8e34bbb93293760552815eabb539fe3a3841f46a05a46eac34afb34ab2c: Status 404 returned error can't find the container with id 2153e8e34bbb93293760552815eabb539fe3a3841f46a05a46eac34afb34ab2c Jan 22 18:30:01 crc kubenswrapper[4758]: I0122 18:30:01.455970 4758 generic.go:334] "Generic (PLEG): container finished" podID="b86d2c55-8454-4584-b532-b013054209c5" containerID="dbd98272d643997321edbc7ac69920c01e9d6b2a410b482b7c68b5f29fd2b696" exitCode=0 Jan 22 18:30:01 crc kubenswrapper[4758]: I0122 18:30:01.456044 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" event={"ID":"b86d2c55-8454-4584-b532-b013054209c5","Type":"ContainerDied","Data":"dbd98272d643997321edbc7ac69920c01e9d6b2a410b482b7c68b5f29fd2b696"} Jan 22 18:30:01 crc kubenswrapper[4758]: I0122 18:30:01.456233 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" event={"ID":"b86d2c55-8454-4584-b532-b013054209c5","Type":"ContainerStarted","Data":"2153e8e34bbb93293760552815eabb539fe3a3841f46a05a46eac34afb34ab2c"} Jan 22 18:30:02 crc kubenswrapper[4758]: I0122 18:30:02.886494 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.006898 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kzhs\" (UniqueName: \"kubernetes.io/projected/b86d2c55-8454-4584-b532-b013054209c5-kube-api-access-8kzhs\") pod \"b86d2c55-8454-4584-b532-b013054209c5\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.006979 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b86d2c55-8454-4584-b532-b013054209c5-secret-volume\") pod \"b86d2c55-8454-4584-b532-b013054209c5\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.007062 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86d2c55-8454-4584-b532-b013054209c5-config-volume\") pod \"b86d2c55-8454-4584-b532-b013054209c5\" (UID: \"b86d2c55-8454-4584-b532-b013054209c5\") " Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.008065 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b86d2c55-8454-4584-b532-b013054209c5-config-volume" (OuterVolumeSpecName: "config-volume") pod "b86d2c55-8454-4584-b532-b013054209c5" (UID: "b86d2c55-8454-4584-b532-b013054209c5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.020111 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86d2c55-8454-4584-b532-b013054209c5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b86d2c55-8454-4584-b532-b013054209c5" (UID: "b86d2c55-8454-4584-b532-b013054209c5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.020240 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86d2c55-8454-4584-b532-b013054209c5-kube-api-access-8kzhs" (OuterVolumeSpecName: "kube-api-access-8kzhs") pod "b86d2c55-8454-4584-b532-b013054209c5" (UID: "b86d2c55-8454-4584-b532-b013054209c5"). InnerVolumeSpecName "kube-api-access-8kzhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.109629 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b86d2c55-8454-4584-b532-b013054209c5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.109984 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kzhs\" (UniqueName: \"kubernetes.io/projected/b86d2c55-8454-4584-b532-b013054209c5-kube-api-access-8kzhs\") on node \"crc\" DevicePath \"\"" Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.109999 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b86d2c55-8454-4584-b532-b013054209c5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.478311 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" event={"ID":"b86d2c55-8454-4584-b532-b013054209c5","Type":"ContainerDied","Data":"2153e8e34bbb93293760552815eabb539fe3a3841f46a05a46eac34afb34ab2c"} Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.478719 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2153e8e34bbb93293760552815eabb539fe3a3841f46a05a46eac34afb34ab2c" Jan 22 18:30:03 crc kubenswrapper[4758]: I0122 18:30:03.478365 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485110-q9xpx" Jan 22 18:30:04 crc kubenswrapper[4758]: I0122 18:30:04.042816 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94"] Jan 22 18:30:04 crc kubenswrapper[4758]: I0122 18:30:04.056043 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485065-mlm94"] Jan 22 18:30:04 crc kubenswrapper[4758]: I0122 18:30:04.822124 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793d9467-9d54-4846-b94d-a37e214504ee" path="/var/lib/kubelet/pods/793d9467-9d54-4846-b94d-a37e214504ee/volumes" Jan 22 18:30:13 crc kubenswrapper[4758]: I0122 18:30:13.839133 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:30:13 crc kubenswrapper[4758]: I0122 18:30:13.840211 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:30:13 crc kubenswrapper[4758]: I0122 18:30:13.840313 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 18:30:13 crc kubenswrapper[4758]: I0122 18:30:13.841890 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a6a8e642e4ee60ebde8d328db1c83e15009314791b4ff4fb0767d4d7274d9c0"} 
pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 18:30:13 crc kubenswrapper[4758]: I0122 18:30:13.842031 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://2a6a8e642e4ee60ebde8d328db1c83e15009314791b4ff4fb0767d4d7274d9c0" gracePeriod=600 Jan 22 18:30:14 crc kubenswrapper[4758]: I0122 18:30:14.877028 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="2a6a8e642e4ee60ebde8d328db1c83e15009314791b4ff4fb0767d4d7274d9c0" exitCode=0 Jan 22 18:30:14 crc kubenswrapper[4758]: I0122 18:30:14.877129 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"2a6a8e642e4ee60ebde8d328db1c83e15009314791b4ff4fb0767d4d7274d9c0"} Jan 22 18:30:14 crc kubenswrapper[4758]: I0122 18:30:14.877688 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99"} Jan 22 18:30:14 crc kubenswrapper[4758]: I0122 18:30:14.877766 4758 scope.go:117] "RemoveContainer" containerID="91d7ffcf810ea3c608d14de0dc105ec1bbd1a1cbe5e8f5d16d87dab78b231a4d" Jan 22 18:31:04 crc kubenswrapper[4758]: I0122 18:31:04.770167 4758 scope.go:117] "RemoveContainer" containerID="71feffce9b38bbaffaf76f2ee35515dd25b3fa00f30e54aefa7c9195f2c008b2" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.200569 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6lbk6/must-gather-928fr"] Jan 22 18:31:36 crc kubenswrapper[4758]: E0122 18:31:36.201660 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86d2c55-8454-4584-b532-b013054209c5" containerName="collect-profiles" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.201675 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86d2c55-8454-4584-b532-b013054209c5" containerName="collect-profiles" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.201941 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86d2c55-8454-4584-b532-b013054209c5" containerName="collect-profiles" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.203089 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/must-gather-928fr" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.204606 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6lbk6"/"openshift-service-ca.crt" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.204734 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6lbk6"/"default-dockercfg-g7bb7" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.204756 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6lbk6"/"kube-root-ca.crt" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.222329 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6lbk6/must-gather-928fr"] Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.320773 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a357e497-4622-4b8e-9ea7-9bfd5efa4716-must-gather-output\") pod \"must-gather-928fr\" (UID: \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\") " pod="openshift-must-gather-6lbk6/must-gather-928fr" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.320913 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zpzk\" (UniqueName: \"kubernetes.io/projected/a357e497-4622-4b8e-9ea7-9bfd5efa4716-kube-api-access-5zpzk\") pod \"must-gather-928fr\" (UID: \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\") " pod="openshift-must-gather-6lbk6/must-gather-928fr" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.422967 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a357e497-4622-4b8e-9ea7-9bfd5efa4716-must-gather-output\") pod \"must-gather-928fr\" (UID: \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\") " pod="openshift-must-gather-6lbk6/must-gather-928fr" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.423111 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zpzk\" (UniqueName: \"kubernetes.io/projected/a357e497-4622-4b8e-9ea7-9bfd5efa4716-kube-api-access-5zpzk\") pod \"must-gather-928fr\" (UID: \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\") " pod="openshift-must-gather-6lbk6/must-gather-928fr" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.423349 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a357e497-4622-4b8e-9ea7-9bfd5efa4716-must-gather-output\") pod \"must-gather-928fr\" (UID: \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\") " pod="openshift-must-gather-6lbk6/must-gather-928fr" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.447336 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zpzk\" (UniqueName: \"kubernetes.io/projected/a357e497-4622-4b8e-9ea7-9bfd5efa4716-kube-api-access-5zpzk\") pod \"must-gather-928fr\" (UID: \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\") " pod="openshift-must-gather-6lbk6/must-gather-928fr" Jan 22 18:31:36 crc kubenswrapper[4758]: I0122 18:31:36.524527 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/must-gather-928fr" Jan 22 18:31:37 crc kubenswrapper[4758]: I0122 18:31:37.165637 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6lbk6/must-gather-928fr"] Jan 22 18:31:37 crc kubenswrapper[4758]: I0122 18:31:37.173767 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 18:31:37 crc kubenswrapper[4758]: I0122 18:31:37.818113 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/must-gather-928fr" event={"ID":"a357e497-4622-4b8e-9ea7-9bfd5efa4716","Type":"ContainerStarted","Data":"64a1b9316cf0ee05967ecaee4ff861c6dc40f87f63e31e23d0b1c4f0a619aa0e"} Jan 22 18:31:44 crc kubenswrapper[4758]: I0122 18:31:44.917892 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/must-gather-928fr" event={"ID":"a357e497-4622-4b8e-9ea7-9bfd5efa4716","Type":"ContainerStarted","Data":"81c86fb605c2717632248fca8f61655ac5c10b9b226626cbc38f55aeec103df9"} Jan 22 18:31:44 crc kubenswrapper[4758]: I0122 18:31:44.918465 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/must-gather-928fr" event={"ID":"a357e497-4622-4b8e-9ea7-9bfd5efa4716","Type":"ContainerStarted","Data":"c63c4d153cae82f00105da0713ffa87273a2c3f987de7c02d199ccb1988003be"} Jan 22 18:31:44 crc kubenswrapper[4758]: I0122 18:31:44.945938 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6lbk6/must-gather-928fr" podStartSLOduration=2.344954238 podStartE2EDuration="8.945901395s" podCreationTimestamp="2026-01-22 18:31:36 +0000 UTC" firstStartedPulling="2026-01-22 18:31:37.173412791 +0000 UTC m=+7318.656752076" lastFinishedPulling="2026-01-22 18:31:43.774359948 +0000 UTC m=+7325.257699233" observedRunningTime="2026-01-22 18:31:44.932815639 +0000 UTC m=+7326.416154924" watchObservedRunningTime="2026-01-22 18:31:44.945901395 +0000 UTC m=+7326.429240680" Jan 22 18:31:48 crc kubenswrapper[4758]: I0122 18:31:48.618668 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6lbk6/crc-debug-9rtsq"] Jan 22 18:31:48 crc kubenswrapper[4758]: I0122 18:31:48.620623 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:31:48 crc kubenswrapper[4758]: I0122 18:31:48.720853 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbbgs\" (UniqueName: \"kubernetes.io/projected/cc45afaf-9258-414a-a784-ba0fef57349f-kube-api-access-lbbgs\") pod \"crc-debug-9rtsq\" (UID: \"cc45afaf-9258-414a-a784-ba0fef57349f\") " pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:31:48 crc kubenswrapper[4758]: I0122 18:31:48.720969 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc45afaf-9258-414a-a784-ba0fef57349f-host\") pod \"crc-debug-9rtsq\" (UID: \"cc45afaf-9258-414a-a784-ba0fef57349f\") " pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:31:48 crc kubenswrapper[4758]: I0122 18:31:48.822650 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbbgs\" (UniqueName: \"kubernetes.io/projected/cc45afaf-9258-414a-a784-ba0fef57349f-kube-api-access-lbbgs\") pod \"crc-debug-9rtsq\" (UID: \"cc45afaf-9258-414a-a784-ba0fef57349f\") " pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:31:48 crc kubenswrapper[4758]: I0122 18:31:48.822780 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc45afaf-9258-414a-a784-ba0fef57349f-host\") pod \"crc-debug-9rtsq\" (UID: \"cc45afaf-9258-414a-a784-ba0fef57349f\") " pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:31:48 crc kubenswrapper[4758]: I0122 18:31:48.823060 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc45afaf-9258-414a-a784-ba0fef57349f-host\") pod \"crc-debug-9rtsq\" (UID: \"cc45afaf-9258-414a-a784-ba0fef57349f\") " pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:31:48 crc kubenswrapper[4758]: I0122 18:31:48.845469 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbbgs\" (UniqueName: \"kubernetes.io/projected/cc45afaf-9258-414a-a784-ba0fef57349f-kube-api-access-lbbgs\") pod \"crc-debug-9rtsq\" (UID: \"cc45afaf-9258-414a-a784-ba0fef57349f\") " pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:31:48 crc kubenswrapper[4758]: I0122 18:31:48.942556 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:31:50 crc kubenswrapper[4758]: I0122 18:31:50.024939 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" event={"ID":"cc45afaf-9258-414a-a784-ba0fef57349f","Type":"ContainerStarted","Data":"745c4a8746795b3200dbf986c16f0d29a12a156b0e742675ff24697cfa3d259e"} Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.732873 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tw5bj"] Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.735994 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.741727 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tw5bj"] Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.759022 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-utilities\") pod \"community-operators-tw5bj\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.759387 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwcpj\" (UniqueName: \"kubernetes.io/projected/2d806d29-def9-436a-8c9c-90e378bcfc22-kube-api-access-xwcpj\") pod \"community-operators-tw5bj\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.759435 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-catalog-content\") pod \"community-operators-tw5bj\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.860726 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwcpj\" (UniqueName: \"kubernetes.io/projected/2d806d29-def9-436a-8c9c-90e378bcfc22-kube-api-access-xwcpj\") pod \"community-operators-tw5bj\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.860865 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-catalog-content\") pod \"community-operators-tw5bj\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.860907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-utilities\") pod \"community-operators-tw5bj\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.861360 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-catalog-content\") pod \"community-operators-tw5bj\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.861695 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-utilities\") pod \"community-operators-tw5bj\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:54 crc kubenswrapper[4758]: I0122 18:31:54.891227 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xwcpj\" (UniqueName: \"kubernetes.io/projected/2d806d29-def9-436a-8c9c-90e378bcfc22-kube-api-access-xwcpj\") pod \"community-operators-tw5bj\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:31:55 crc kubenswrapper[4758]: I0122 18:31:55.057612 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:32:00 crc kubenswrapper[4758]: W0122 18:32:00.371705 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d806d29_def9_436a_8c9c_90e378bcfc22.slice/crio-fcf4cebc137e502c4eb44631844018e41d7a072b6f2f66b4e98f7e85f949ab95 WatchSource:0}: Error finding container fcf4cebc137e502c4eb44631844018e41d7a072b6f2f66b4e98f7e85f949ab95: Status 404 returned error can't find the container with id fcf4cebc137e502c4eb44631844018e41d7a072b6f2f66b4e98f7e85f949ab95 Jan 22 18:32:00 crc kubenswrapper[4758]: I0122 18:32:00.388463 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tw5bj"] Jan 22 18:32:01 crc kubenswrapper[4758]: I0122 18:32:01.183833 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" event={"ID":"cc45afaf-9258-414a-a784-ba0fef57349f","Type":"ContainerStarted","Data":"16d98b054a985a06cbf1a01ab07bd1796951c1d372c7eeacebf49207c18da139"} Jan 22 18:32:01 crc kubenswrapper[4758]: I0122 18:32:01.186317 4758 generic.go:334] "Generic (PLEG): container finished" podID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerID="dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1" exitCode=0 Jan 22 18:32:01 crc kubenswrapper[4758]: I0122 18:32:01.186362 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw5bj" event={"ID":"2d806d29-def9-436a-8c9c-90e378bcfc22","Type":"ContainerDied","Data":"dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1"} Jan 22 18:32:01 crc kubenswrapper[4758]: I0122 18:32:01.186387 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw5bj" event={"ID":"2d806d29-def9-436a-8c9c-90e378bcfc22","Type":"ContainerStarted","Data":"fcf4cebc137e502c4eb44631844018e41d7a072b6f2f66b4e98f7e85f949ab95"} Jan 22 18:32:01 crc kubenswrapper[4758]: I0122 18:32:01.210420 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" podStartSLOduration=2.258449822 podStartE2EDuration="13.210393576s" podCreationTimestamp="2026-01-22 18:31:48 +0000 UTC" firstStartedPulling="2026-01-22 18:31:48.986439228 +0000 UTC m=+7330.469778513" lastFinishedPulling="2026-01-22 18:31:59.938382982 +0000 UTC m=+7341.421722267" observedRunningTime="2026-01-22 18:32:01.200513028 +0000 UTC m=+7342.683852313" watchObservedRunningTime="2026-01-22 18:32:01.210393576 +0000 UTC m=+7342.693732861" Jan 22 18:32:03 crc kubenswrapper[4758]: I0122 18:32:03.226362 4758 generic.go:334] "Generic (PLEG): container finished" podID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerID="b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d" exitCode=0 Jan 22 18:32:03 crc kubenswrapper[4758]: I0122 18:32:03.226475 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw5bj" 
event={"ID":"2d806d29-def9-436a-8c9c-90e378bcfc22","Type":"ContainerDied","Data":"b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d"} Jan 22 18:32:04 crc kubenswrapper[4758]: I0122 18:32:04.240052 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw5bj" event={"ID":"2d806d29-def9-436a-8c9c-90e378bcfc22","Type":"ContainerStarted","Data":"85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944"} Jan 22 18:32:04 crc kubenswrapper[4758]: I0122 18:32:04.269298 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tw5bj" podStartSLOduration=7.578808496 podStartE2EDuration="10.269279088s" podCreationTimestamp="2026-01-22 18:31:54 +0000 UTC" firstStartedPulling="2026-01-22 18:32:01.187601988 +0000 UTC m=+7342.670941273" lastFinishedPulling="2026-01-22 18:32:03.87807258 +0000 UTC m=+7345.361411865" observedRunningTime="2026-01-22 18:32:04.260694445 +0000 UTC m=+7345.744033730" watchObservedRunningTime="2026-01-22 18:32:04.269279088 +0000 UTC m=+7345.752618363" Jan 22 18:32:05 crc kubenswrapper[4758]: I0122 18:32:05.058645 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:32:05 crc kubenswrapper[4758]: I0122 18:32:05.058695 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:32:06 crc kubenswrapper[4758]: I0122 18:32:06.109070 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tw5bj" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerName="registry-server" probeResult="failure" output=< Jan 22 18:32:06 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 18:32:06 crc kubenswrapper[4758]: > Jan 22 18:32:15 crc kubenswrapper[4758]: I0122 18:32:15.139168 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:32:15 crc kubenswrapper[4758]: I0122 18:32:15.189098 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:32:15 crc kubenswrapper[4758]: I0122 18:32:15.383721 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tw5bj"] Jan 22 18:32:16 crc kubenswrapper[4758]: I0122 18:32:16.378114 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tw5bj" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerName="registry-server" containerID="cri-o://85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944" gracePeriod=2 Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.181166 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.322295 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-catalog-content\") pod \"2d806d29-def9-436a-8c9c-90e378bcfc22\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.322397 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-utilities\") pod \"2d806d29-def9-436a-8c9c-90e378bcfc22\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.322490 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwcpj\" (UniqueName: \"kubernetes.io/projected/2d806d29-def9-436a-8c9c-90e378bcfc22-kube-api-access-xwcpj\") pod \"2d806d29-def9-436a-8c9c-90e378bcfc22\" (UID: \"2d806d29-def9-436a-8c9c-90e378bcfc22\") " Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.323673 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-utilities" (OuterVolumeSpecName: "utilities") pod "2d806d29-def9-436a-8c9c-90e378bcfc22" (UID: "2d806d29-def9-436a-8c9c-90e378bcfc22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.334756 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d806d29-def9-436a-8c9c-90e378bcfc22-kube-api-access-xwcpj" (OuterVolumeSpecName: "kube-api-access-xwcpj") pod "2d806d29-def9-436a-8c9c-90e378bcfc22" (UID: "2d806d29-def9-436a-8c9c-90e378bcfc22"). InnerVolumeSpecName "kube-api-access-xwcpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.385390 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d806d29-def9-436a-8c9c-90e378bcfc22" (UID: "2d806d29-def9-436a-8c9c-90e378bcfc22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.393274 4758 generic.go:334] "Generic (PLEG): container finished" podID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerID="85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944" exitCode=0 Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.393325 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw5bj" event={"ID":"2d806d29-def9-436a-8c9c-90e378bcfc22","Type":"ContainerDied","Data":"85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944"} Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.393364 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tw5bj" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.393605 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw5bj" event={"ID":"2d806d29-def9-436a-8c9c-90e378bcfc22","Type":"ContainerDied","Data":"fcf4cebc137e502c4eb44631844018e41d7a072b6f2f66b4e98f7e85f949ab95"} Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.393692 4758 scope.go:117] "RemoveContainer" containerID="85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.427691 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.427732 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d806d29-def9-436a-8c9c-90e378bcfc22-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.427766 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwcpj\" (UniqueName: \"kubernetes.io/projected/2d806d29-def9-436a-8c9c-90e378bcfc22-kube-api-access-xwcpj\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.438896 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tw5bj"] Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.448011 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tw5bj"] Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.450345 4758 scope.go:117] "RemoveContainer" containerID="b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.487497 4758 scope.go:117] "RemoveContainer" containerID="dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.522920 4758 scope.go:117] "RemoveContainer" containerID="85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944" Jan 22 18:32:17 crc kubenswrapper[4758]: E0122 18:32:17.523434 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944\": container with ID starting with 85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944 not found: ID does not exist" containerID="85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.523500 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944"} err="failed to get container status \"85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944\": rpc error: code = NotFound desc = could not find container \"85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944\": container with ID starting with 85158cda2b26b4709001c47b264a9f58d7842c549039836f1475044d57cfe944 not found: ID does not exist" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.523535 4758 scope.go:117] "RemoveContainer" containerID="b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d" Jan 22 18:32:17 crc kubenswrapper[4758]: 
E0122 18:32:17.524111 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d\": container with ID starting with b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d not found: ID does not exist" containerID="b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.524244 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d"} err="failed to get container status \"b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d\": rpc error: code = NotFound desc = could not find container \"b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d\": container with ID starting with b934cccd0c1e61599eb69fdca95275f611830c03b2f876d336efab1e4074318d not found: ID does not exist" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.524362 4758 scope.go:117] "RemoveContainer" containerID="dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1" Jan 22 18:32:17 crc kubenswrapper[4758]: E0122 18:32:17.524772 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1\": container with ID starting with dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1 not found: ID does not exist" containerID="dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1" Jan 22 18:32:17 crc kubenswrapper[4758]: I0122 18:32:17.524807 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1"} err="failed to get container status \"dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1\": rpc error: code = NotFound desc = could not find container \"dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1\": container with ID starting with dc7d9bd667f8e68d2e208bd95537280b9be5bcef846dba9cc8cf62f9ad57bab1 not found: ID does not exist" Jan 22 18:32:18 crc kubenswrapper[4758]: I0122 18:32:18.821665 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" path="/var/lib/kubelet/pods/2d806d29-def9-436a-8c9c-90e378bcfc22/volumes" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.364390 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6bcpg"] Jan 22 18:32:27 crc kubenswrapper[4758]: E0122 18:32:27.366126 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerName="extract-content" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.366142 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerName="extract-content" Jan 22 18:32:27 crc kubenswrapper[4758]: E0122 18:32:27.366175 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerName="extract-utilities" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.366181 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerName="extract-utilities" Jan 22 18:32:27 crc kubenswrapper[4758]: E0122 18:32:27.366208 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerName="registry-server" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.366214 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerName="registry-server" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.366421 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d806d29-def9-436a-8c9c-90e378bcfc22" containerName="registry-server" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.367920 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.386087 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bcpg"] Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.474262 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-utilities\") pod \"redhat-operators-6bcpg\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.474665 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-catalog-content\") pod \"redhat-operators-6bcpg\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.474912 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gwbn\" (UniqueName: \"kubernetes.io/projected/e6b17958-7e4f-487d-96be-904ac8f181d8-kube-api-access-8gwbn\") pod \"redhat-operators-6bcpg\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.577788 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-utilities\") pod \"redhat-operators-6bcpg\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.577936 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-catalog-content\") pod \"redhat-operators-6bcpg\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.577988 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gwbn\" (UniqueName: \"kubernetes.io/projected/e6b17958-7e4f-487d-96be-904ac8f181d8-kube-api-access-8gwbn\") pod \"redhat-operators-6bcpg\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.579016 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-utilities\") pod 
\"redhat-operators-6bcpg\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.579374 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-catalog-content\") pod \"redhat-operators-6bcpg\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.600435 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gwbn\" (UniqueName: \"kubernetes.io/projected/e6b17958-7e4f-487d-96be-904ac8f181d8-kube-api-access-8gwbn\") pod \"redhat-operators-6bcpg\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:27 crc kubenswrapper[4758]: I0122 18:32:27.688453 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:28 crc kubenswrapper[4758]: I0122 18:32:28.178486 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bcpg"] Jan 22 18:32:28 crc kubenswrapper[4758]: I0122 18:32:28.501637 4758 generic.go:334] "Generic (PLEG): container finished" podID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerID="8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a" exitCode=0 Jan 22 18:32:28 crc kubenswrapper[4758]: I0122 18:32:28.501690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bcpg" event={"ID":"e6b17958-7e4f-487d-96be-904ac8f181d8","Type":"ContainerDied","Data":"8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a"} Jan 22 18:32:28 crc kubenswrapper[4758]: I0122 18:32:28.502033 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bcpg" event={"ID":"e6b17958-7e4f-487d-96be-904ac8f181d8","Type":"ContainerStarted","Data":"25168e230e7832d3dc66d1f760d6382decde43d2f272250e832d19866ff1b9e7"} Jan 22 18:32:29 crc kubenswrapper[4758]: I0122 18:32:29.515218 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bcpg" event={"ID":"e6b17958-7e4f-487d-96be-904ac8f181d8","Type":"ContainerStarted","Data":"6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a"} Jan 22 18:32:33 crc kubenswrapper[4758]: I0122 18:32:33.555920 4758 generic.go:334] "Generic (PLEG): container finished" podID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerID="6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a" exitCode=0 Jan 22 18:32:33 crc kubenswrapper[4758]: I0122 18:32:33.556010 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bcpg" event={"ID":"e6b17958-7e4f-487d-96be-904ac8f181d8","Type":"ContainerDied","Data":"6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a"} Jan 22 18:32:34 crc kubenswrapper[4758]: I0122 18:32:34.569003 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bcpg" event={"ID":"e6b17958-7e4f-487d-96be-904ac8f181d8","Type":"ContainerStarted","Data":"2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537"} Jan 22 18:32:34 crc kubenswrapper[4758]: I0122 18:32:34.597365 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-6bcpg" podStartSLOduration=2.119420454 podStartE2EDuration="7.59734089s" podCreationTimestamp="2026-01-22 18:32:27 +0000 UTC" firstStartedPulling="2026-01-22 18:32:28.504806813 +0000 UTC m=+7369.988146098" lastFinishedPulling="2026-01-22 18:32:33.982727249 +0000 UTC m=+7375.466066534" observedRunningTime="2026-01-22 18:32:34.592618432 +0000 UTC m=+7376.075957727" watchObservedRunningTime="2026-01-22 18:32:34.59734089 +0000 UTC m=+7376.080680175" Jan 22 18:32:37 crc kubenswrapper[4758]: I0122 18:32:37.689128 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:37 crc kubenswrapper[4758]: I0122 18:32:37.691066 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:38 crc kubenswrapper[4758]: I0122 18:32:38.746248 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6bcpg" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerName="registry-server" probeResult="failure" output=< Jan 22 18:32:38 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 22 18:32:38 crc kubenswrapper[4758]: > Jan 22 18:32:43 crc kubenswrapper[4758]: I0122 18:32:43.837278 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:32:43 crc kubenswrapper[4758]: I0122 18:32:43.838265 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:32:47 crc kubenswrapper[4758]: I0122 18:32:47.745946 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:47 crc kubenswrapper[4758]: I0122 18:32:47.805173 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:47 crc kubenswrapper[4758]: I0122 18:32:47.984717 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6bcpg"] Jan 22 18:32:48 crc kubenswrapper[4758]: I0122 18:32:48.704246 4758 generic.go:334] "Generic (PLEG): container finished" podID="cc45afaf-9258-414a-a784-ba0fef57349f" containerID="16d98b054a985a06cbf1a01ab07bd1796951c1d372c7eeacebf49207c18da139" exitCode=0 Jan 22 18:32:48 crc kubenswrapper[4758]: I0122 18:32:48.704405 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" event={"ID":"cc45afaf-9258-414a-a784-ba0fef57349f","Type":"ContainerDied","Data":"16d98b054a985a06cbf1a01ab07bd1796951c1d372c7eeacebf49207c18da139"} Jan 22 18:32:49 crc kubenswrapper[4758]: I0122 18:32:49.716912 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6bcpg" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerName="registry-server" containerID="cri-o://2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537" gracePeriod=2 Jan 22 18:32:49 crc 
kubenswrapper[4758]: I0122 18:32:49.993806 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.056443 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6lbk6/crc-debug-9rtsq"] Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.063492 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbbgs\" (UniqueName: \"kubernetes.io/projected/cc45afaf-9258-414a-a784-ba0fef57349f-kube-api-access-lbbgs\") pod \"cc45afaf-9258-414a-a784-ba0fef57349f\" (UID: \"cc45afaf-9258-414a-a784-ba0fef57349f\") " Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.063717 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc45afaf-9258-414a-a784-ba0fef57349f-host\") pod \"cc45afaf-9258-414a-a784-ba0fef57349f\" (UID: \"cc45afaf-9258-414a-a784-ba0fef57349f\") " Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.064420 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc45afaf-9258-414a-a784-ba0fef57349f-host" (OuterVolumeSpecName: "host") pod "cc45afaf-9258-414a-a784-ba0fef57349f" (UID: "cc45afaf-9258-414a-a784-ba0fef57349f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.066647 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6lbk6/crc-debug-9rtsq"] Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.072259 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc45afaf-9258-414a-a784-ba0fef57349f-kube-api-access-lbbgs" (OuterVolumeSpecName: "kube-api-access-lbbgs") pod "cc45afaf-9258-414a-a784-ba0fef57349f" (UID: "cc45afaf-9258-414a-a784-ba0fef57349f"). InnerVolumeSpecName "kube-api-access-lbbgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.166582 4758 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc45afaf-9258-414a-a784-ba0fef57349f-host\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.166641 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbbgs\" (UniqueName: \"kubernetes.io/projected/cc45afaf-9258-414a-a784-ba0fef57349f-kube-api-access-lbbgs\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.184833 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.268292 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-utilities\") pod \"e6b17958-7e4f-487d-96be-904ac8f181d8\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.268336 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-catalog-content\") pod \"e6b17958-7e4f-487d-96be-904ac8f181d8\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.268474 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gwbn\" (UniqueName: \"kubernetes.io/projected/e6b17958-7e4f-487d-96be-904ac8f181d8-kube-api-access-8gwbn\") pod \"e6b17958-7e4f-487d-96be-904ac8f181d8\" (UID: \"e6b17958-7e4f-487d-96be-904ac8f181d8\") " Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.269210 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-utilities" (OuterVolumeSpecName: "utilities") pod "e6b17958-7e4f-487d-96be-904ac8f181d8" (UID: "e6b17958-7e4f-487d-96be-904ac8f181d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.272410 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b17958-7e4f-487d-96be-904ac8f181d8-kube-api-access-8gwbn" (OuterVolumeSpecName: "kube-api-access-8gwbn") pod "e6b17958-7e4f-487d-96be-904ac8f181d8" (UID: "e6b17958-7e4f-487d-96be-904ac8f181d8"). InnerVolumeSpecName "kube-api-access-8gwbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.370614 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.370683 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gwbn\" (UniqueName: \"kubernetes.io/projected/e6b17958-7e4f-487d-96be-904ac8f181d8-kube-api-access-8gwbn\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.399879 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6b17958-7e4f-487d-96be-904ac8f181d8" (UID: "e6b17958-7e4f-487d-96be-904ac8f181d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.473142 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b17958-7e4f-487d-96be-904ac8f181d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.727849 4758 generic.go:334] "Generic (PLEG): container finished" podID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerID="2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537" exitCode=0 Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.728161 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bcpg" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.728015 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bcpg" event={"ID":"e6b17958-7e4f-487d-96be-904ac8f181d8","Type":"ContainerDied","Data":"2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537"} Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.728225 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bcpg" event={"ID":"e6b17958-7e4f-487d-96be-904ac8f181d8","Type":"ContainerDied","Data":"25168e230e7832d3dc66d1f760d6382decde43d2f272250e832d19866ff1b9e7"} Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.728248 4758 scope.go:117] "RemoveContainer" containerID="2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.731652 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="745c4a8746795b3200dbf986c16f0d29a12a156b0e742675ff24697cfa3d259e" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.731705 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-9rtsq" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.800707 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6bcpg"] Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.802507 4758 scope.go:117] "RemoveContainer" containerID="6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.820405 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc45afaf-9258-414a-a784-ba0fef57349f" path="/var/lib/kubelet/pods/cc45afaf-9258-414a-a784-ba0fef57349f/volumes" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.821315 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6bcpg"] Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.842197 4758 scope.go:117] "RemoveContainer" containerID="8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.871906 4758 scope.go:117] "RemoveContainer" containerID="2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537" Jan 22 18:32:50 crc kubenswrapper[4758]: E0122 18:32:50.872367 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537\": container with ID starting with 2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537 not found: ID does not exist" containerID="2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.872425 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537"} err="failed to get container status \"2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537\": rpc error: code = NotFound desc = could not find container \"2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537\": container with ID starting with 2540fa8d006305e9c1b9302b18580ffccae257a4f1f4f9936858d143213fa537 not found: ID does not exist" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.872464 4758 scope.go:117] "RemoveContainer" containerID="6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a" Jan 22 18:32:50 crc kubenswrapper[4758]: E0122 18:32:50.872820 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a\": container with ID starting with 6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a not found: ID does not exist" containerID="6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.872865 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a"} err="failed to get container status \"6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a\": rpc error: code = NotFound desc = could not find container \"6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a\": container with ID starting with 6804523475c3184557f81e66a82c7674b50f6720b235400506a409368786f02a not found: ID does not exist" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 
18:32:50.872896 4758 scope.go:117] "RemoveContainer" containerID="8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a" Jan 22 18:32:50 crc kubenswrapper[4758]: E0122 18:32:50.873129 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a\": container with ID starting with 8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a not found: ID does not exist" containerID="8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a" Jan 22 18:32:50 crc kubenswrapper[4758]: I0122 18:32:50.873165 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a"} err="failed to get container status \"8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a\": rpc error: code = NotFound desc = could not find container \"8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a\": container with ID starting with 8f63d867fb6f488894fbe2857491908c8846fbecfa2bcc8e4c7a5d548ad7161a not found: ID does not exist" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.288714 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6lbk6/crc-debug-k96lf"] Jan 22 18:32:51 crc kubenswrapper[4758]: E0122 18:32:51.289317 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc45afaf-9258-414a-a784-ba0fef57349f" containerName="container-00" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.289339 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc45afaf-9258-414a-a784-ba0fef57349f" containerName="container-00" Jan 22 18:32:51 crc kubenswrapper[4758]: E0122 18:32:51.289367 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerName="extract-content" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.289373 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerName="extract-content" Jan 22 18:32:51 crc kubenswrapper[4758]: E0122 18:32:51.289409 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerName="registry-server" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.289416 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerName="registry-server" Jan 22 18:32:51 crc kubenswrapper[4758]: E0122 18:32:51.289426 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerName="extract-utilities" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.289432 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerName="extract-utilities" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.289656 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" containerName="registry-server" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.289681 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc45afaf-9258-414a-a784-ba0fef57349f" containerName="container-00" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.290438 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.392633 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmgc\" (UniqueName: \"kubernetes.io/projected/ad16792c-5fa2-4018-9247-a8876bfff921-kube-api-access-pfmgc\") pod \"crc-debug-k96lf\" (UID: \"ad16792c-5fa2-4018-9247-a8876bfff921\") " pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.392882 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad16792c-5fa2-4018-9247-a8876bfff921-host\") pod \"crc-debug-k96lf\" (UID: \"ad16792c-5fa2-4018-9247-a8876bfff921\") " pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.494767 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad16792c-5fa2-4018-9247-a8876bfff921-host\") pod \"crc-debug-k96lf\" (UID: \"ad16792c-5fa2-4018-9247-a8876bfff921\") " pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.494979 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad16792c-5fa2-4018-9247-a8876bfff921-host\") pod \"crc-debug-k96lf\" (UID: \"ad16792c-5fa2-4018-9247-a8876bfff921\") " pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.495287 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmgc\" (UniqueName: \"kubernetes.io/projected/ad16792c-5fa2-4018-9247-a8876bfff921-kube-api-access-pfmgc\") pod \"crc-debug-k96lf\" (UID: \"ad16792c-5fa2-4018-9247-a8876bfff921\") " pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.525079 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmgc\" (UniqueName: \"kubernetes.io/projected/ad16792c-5fa2-4018-9247-a8876bfff921-kube-api-access-pfmgc\") pod \"crc-debug-k96lf\" (UID: \"ad16792c-5fa2-4018-9247-a8876bfff921\") " pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.614578 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:51 crc kubenswrapper[4758]: I0122 18:32:51.778345 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/crc-debug-k96lf" event={"ID":"ad16792c-5fa2-4018-9247-a8876bfff921","Type":"ContainerStarted","Data":"ddf52f9e358f2b9aa0f53adef2d7dbc38293c39b442794842731d3530af42109"} Jan 22 18:32:52 crc kubenswrapper[4758]: I0122 18:32:52.788776 4758 generic.go:334] "Generic (PLEG): container finished" podID="ad16792c-5fa2-4018-9247-a8876bfff921" containerID="b687aebc0399c4f878489e9d64bacccd096a7201dd9aa16bde15484cd7ea5f08" exitCode=0 Jan 22 18:32:52 crc kubenswrapper[4758]: I0122 18:32:52.789098 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/crc-debug-k96lf" event={"ID":"ad16792c-5fa2-4018-9247-a8876bfff921","Type":"ContainerDied","Data":"b687aebc0399c4f878489e9d64bacccd096a7201dd9aa16bde15484cd7ea5f08"} Jan 22 18:32:52 crc kubenswrapper[4758]: I0122 18:32:52.830489 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b17958-7e4f-487d-96be-904ac8f181d8" path="/var/lib/kubelet/pods/e6b17958-7e4f-487d-96be-904ac8f181d8/volumes" Jan 22 18:32:53 crc kubenswrapper[4758]: I0122 18:32:53.922964 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.053439 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfmgc\" (UniqueName: \"kubernetes.io/projected/ad16792c-5fa2-4018-9247-a8876bfff921-kube-api-access-pfmgc\") pod \"ad16792c-5fa2-4018-9247-a8876bfff921\" (UID: \"ad16792c-5fa2-4018-9247-a8876bfff921\") " Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.053708 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad16792c-5fa2-4018-9247-a8876bfff921-host\") pod \"ad16792c-5fa2-4018-9247-a8876bfff921\" (UID: \"ad16792c-5fa2-4018-9247-a8876bfff921\") " Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.053988 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad16792c-5fa2-4018-9247-a8876bfff921-host" (OuterVolumeSpecName: "host") pod "ad16792c-5fa2-4018-9247-a8876bfff921" (UID: "ad16792c-5fa2-4018-9247-a8876bfff921"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.054646 4758 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad16792c-5fa2-4018-9247-a8876bfff921-host\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.072145 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad16792c-5fa2-4018-9247-a8876bfff921-kube-api-access-pfmgc" (OuterVolumeSpecName: "kube-api-access-pfmgc") pod "ad16792c-5fa2-4018-9247-a8876bfff921" (UID: "ad16792c-5fa2-4018-9247-a8876bfff921"). InnerVolumeSpecName "kube-api-access-pfmgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.156992 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfmgc\" (UniqueName: \"kubernetes.io/projected/ad16792c-5fa2-4018-9247-a8876bfff921-kube-api-access-pfmgc\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.812673 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-k96lf" Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.842107 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6lbk6/crc-debug-k96lf"] Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.842176 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/crc-debug-k96lf" event={"ID":"ad16792c-5fa2-4018-9247-a8876bfff921","Type":"ContainerDied","Data":"ddf52f9e358f2b9aa0f53adef2d7dbc38293c39b442794842731d3530af42109"} Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.842228 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddf52f9e358f2b9aa0f53adef2d7dbc38293c39b442794842731d3530af42109" Jan 22 18:32:54 crc kubenswrapper[4758]: I0122 18:32:54.842245 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6lbk6/crc-debug-k96lf"] Jan 22 18:32:55 crc kubenswrapper[4758]: I0122 18:32:55.991147 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6lbk6/crc-debug-64t6d"] Jan 22 18:32:55 crc kubenswrapper[4758]: E0122 18:32:55.992412 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad16792c-5fa2-4018-9247-a8876bfff921" containerName="container-00" Jan 22 18:32:55 crc kubenswrapper[4758]: I0122 18:32:55.992457 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad16792c-5fa2-4018-9247-a8876bfff921" containerName="container-00" Jan 22 18:32:55 crc kubenswrapper[4758]: I0122 18:32:55.992695 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad16792c-5fa2-4018-9247-a8876bfff921" containerName="container-00" Jan 22 18:32:55 crc kubenswrapper[4758]: I0122 18:32:55.993558 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.098840 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f354e99-225a-4821-a06d-ec540e06d9e5-host\") pod \"crc-debug-64t6d\" (UID: \"8f354e99-225a-4821-a06d-ec540e06d9e5\") " pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.099041 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfnwd\" (UniqueName: \"kubernetes.io/projected/8f354e99-225a-4821-a06d-ec540e06d9e5-kube-api-access-tfnwd\") pod \"crc-debug-64t6d\" (UID: \"8f354e99-225a-4821-a06d-ec540e06d9e5\") " pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.201681 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfnwd\" (UniqueName: \"kubernetes.io/projected/8f354e99-225a-4821-a06d-ec540e06d9e5-kube-api-access-tfnwd\") pod \"crc-debug-64t6d\" (UID: \"8f354e99-225a-4821-a06d-ec540e06d9e5\") " pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.201935 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f354e99-225a-4821-a06d-ec540e06d9e5-host\") pod \"crc-debug-64t6d\" (UID: \"8f354e99-225a-4821-a06d-ec540e06d9e5\") " pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.202156 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f354e99-225a-4821-a06d-ec540e06d9e5-host\") pod \"crc-debug-64t6d\" (UID: \"8f354e99-225a-4821-a06d-ec540e06d9e5\") " pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.222042 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfnwd\" (UniqueName: \"kubernetes.io/projected/8f354e99-225a-4821-a06d-ec540e06d9e5-kube-api-access-tfnwd\") pod \"crc-debug-64t6d\" (UID: \"8f354e99-225a-4821-a06d-ec540e06d9e5\") " pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.318354 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:32:56 crc kubenswrapper[4758]: W0122 18:32:56.355509 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f354e99_225a_4821_a06d_ec540e06d9e5.slice/crio-da0f5cee1b90ce64e65a353be86c0de683594ca91ab10759077093947043f02d WatchSource:0}: Error finding container da0f5cee1b90ce64e65a353be86c0de683594ca91ab10759077093947043f02d: Status 404 returned error can't find the container with id da0f5cee1b90ce64e65a353be86c0de683594ca91ab10759077093947043f02d Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.842829 4758 generic.go:334] "Generic (PLEG): container finished" podID="8f354e99-225a-4821-a06d-ec540e06d9e5" containerID="26fbfeb0a3f57edb3de302dfa69ca69d385efe5b0f2121e92e313149d784e96f" exitCode=0 Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.846511 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad16792c-5fa2-4018-9247-a8876bfff921" path="/var/lib/kubelet/pods/ad16792c-5fa2-4018-9247-a8876bfff921/volumes" Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.847058 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/crc-debug-64t6d" event={"ID":"8f354e99-225a-4821-a06d-ec540e06d9e5","Type":"ContainerDied","Data":"26fbfeb0a3f57edb3de302dfa69ca69d385efe5b0f2121e92e313149d784e96f"} Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.847087 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/crc-debug-64t6d" event={"ID":"8f354e99-225a-4821-a06d-ec540e06d9e5","Type":"ContainerStarted","Data":"da0f5cee1b90ce64e65a353be86c0de683594ca91ab10759077093947043f02d"} Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.890204 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6lbk6/crc-debug-64t6d"] Jan 22 18:32:56 crc kubenswrapper[4758]: I0122 18:32:56.900341 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6lbk6/crc-debug-64t6d"] Jan 22 18:32:57 crc kubenswrapper[4758]: I0122 18:32:57.994371 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:32:58 crc kubenswrapper[4758]: I0122 18:32:58.153484 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f354e99-225a-4821-a06d-ec540e06d9e5-host\") pod \"8f354e99-225a-4821-a06d-ec540e06d9e5\" (UID: \"8f354e99-225a-4821-a06d-ec540e06d9e5\") " Jan 22 18:32:58 crc kubenswrapper[4758]: I0122 18:32:58.153546 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfnwd\" (UniqueName: \"kubernetes.io/projected/8f354e99-225a-4821-a06d-ec540e06d9e5-kube-api-access-tfnwd\") pod \"8f354e99-225a-4821-a06d-ec540e06d9e5\" (UID: \"8f354e99-225a-4821-a06d-ec540e06d9e5\") " Jan 22 18:32:58 crc kubenswrapper[4758]: I0122 18:32:58.154601 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f354e99-225a-4821-a06d-ec540e06d9e5-host" (OuterVolumeSpecName: "host") pod "8f354e99-225a-4821-a06d-ec540e06d9e5" (UID: "8f354e99-225a-4821-a06d-ec540e06d9e5"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 18:32:58 crc kubenswrapper[4758]: I0122 18:32:58.165276 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f354e99-225a-4821-a06d-ec540e06d9e5-kube-api-access-tfnwd" (OuterVolumeSpecName: "kube-api-access-tfnwd") pod "8f354e99-225a-4821-a06d-ec540e06d9e5" (UID: "8f354e99-225a-4821-a06d-ec540e06d9e5"). InnerVolumeSpecName "kube-api-access-tfnwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:32:58 crc kubenswrapper[4758]: I0122 18:32:58.256848 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfnwd\" (UniqueName: \"kubernetes.io/projected/8f354e99-225a-4821-a06d-ec540e06d9e5-kube-api-access-tfnwd\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:58 crc kubenswrapper[4758]: I0122 18:32:58.257230 4758 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f354e99-225a-4821-a06d-ec540e06d9e5-host\") on node \"crc\" DevicePath \"\"" Jan 22 18:32:58 crc kubenswrapper[4758]: I0122 18:32:58.824300 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f354e99-225a-4821-a06d-ec540e06d9e5" path="/var/lib/kubelet/pods/8f354e99-225a-4821-a06d-ec540e06d9e5/volumes" Jan 22 18:32:58 crc kubenswrapper[4758]: I0122 18:32:58.865897 4758 scope.go:117] "RemoveContainer" containerID="26fbfeb0a3f57edb3de302dfa69ca69d385efe5b0f2121e92e313149d784e96f" Jan 22 18:32:58 crc kubenswrapper[4758]: I0122 18:32:58.866127 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lbk6/crc-debug-64t6d" Jan 22 18:33:13 crc kubenswrapper[4758]: I0122 18:33:13.837124 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:33:13 crc kubenswrapper[4758]: I0122 18:33:13.837596 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.757293 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nqqvd"] Jan 22 18:33:21 crc kubenswrapper[4758]: E0122 18:33:21.758461 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f354e99-225a-4821-a06d-ec540e06d9e5" containerName="container-00" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.758484 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f354e99-225a-4821-a06d-ec540e06d9e5" containerName="container-00" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.758856 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f354e99-225a-4821-a06d-ec540e06d9e5" containerName="container-00" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.761105 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.772120 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqqvd"] Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.854577 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-utilities\") pod \"certified-operators-nqqvd\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.854835 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bsnw\" (UniqueName: \"kubernetes.io/projected/52f21b16-b14c-457a-9598-e53499f22ad2-kube-api-access-8bsnw\") pod \"certified-operators-nqqvd\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.855129 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-catalog-content\") pod \"certified-operators-nqqvd\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.957864 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-catalog-content\") pod \"certified-operators-nqqvd\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.958072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-utilities\") pod \"certified-operators-nqqvd\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.958160 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bsnw\" (UniqueName: \"kubernetes.io/projected/52f21b16-b14c-457a-9598-e53499f22ad2-kube-api-access-8bsnw\") pod \"certified-operators-nqqvd\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.958357 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-catalog-content\") pod \"certified-operators-nqqvd\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.958378 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-utilities\") pod \"certified-operators-nqqvd\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:21 crc kubenswrapper[4758]: I0122 18:33:21.981266 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8bsnw\" (UniqueName: \"kubernetes.io/projected/52f21b16-b14c-457a-9598-e53499f22ad2-kube-api-access-8bsnw\") pod \"certified-operators-nqqvd\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:22 crc kubenswrapper[4758]: I0122 18:33:22.095395 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:22 crc kubenswrapper[4758]: I0122 18:33:22.865100 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqqvd"] Jan 22 18:33:23 crc kubenswrapper[4758]: I0122 18:33:23.193893 4758 generic.go:334] "Generic (PLEG): container finished" podID="52f21b16-b14c-457a-9598-e53499f22ad2" containerID="6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc" exitCode=0 Jan 22 18:33:23 crc kubenswrapper[4758]: I0122 18:33:23.194006 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqvd" event={"ID":"52f21b16-b14c-457a-9598-e53499f22ad2","Type":"ContainerDied","Data":"6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc"} Jan 22 18:33:23 crc kubenswrapper[4758]: I0122 18:33:23.194223 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqvd" event={"ID":"52f21b16-b14c-457a-9598-e53499f22ad2","Type":"ContainerStarted","Data":"1d28cedd7b4cdd20ed7cf102d7ebe6674367d0450033213d624b7b0dfb359dbd"} Jan 22 18:33:24 crc kubenswrapper[4758]: I0122 18:33:24.213551 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqvd" event={"ID":"52f21b16-b14c-457a-9598-e53499f22ad2","Type":"ContainerStarted","Data":"8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3"} Jan 22 18:33:25 crc kubenswrapper[4758]: I0122 18:33:25.223508 4758 generic.go:334] "Generic (PLEG): container finished" podID="52f21b16-b14c-457a-9598-e53499f22ad2" containerID="8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3" exitCode=0 Jan 22 18:33:25 crc kubenswrapper[4758]: I0122 18:33:25.223575 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqvd" event={"ID":"52f21b16-b14c-457a-9598-e53499f22ad2","Type":"ContainerDied","Data":"8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3"} Jan 22 18:33:26 crc kubenswrapper[4758]: I0122 18:33:26.237283 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqvd" event={"ID":"52f21b16-b14c-457a-9598-e53499f22ad2","Type":"ContainerStarted","Data":"e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a"} Jan 22 18:33:26 crc kubenswrapper[4758]: I0122 18:33:26.258709 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nqqvd" podStartSLOduration=2.784487684 podStartE2EDuration="5.258683866s" podCreationTimestamp="2026-01-22 18:33:21 +0000 UTC" firstStartedPulling="2026-01-22 18:33:23.19591226 +0000 UTC m=+7424.679251565" lastFinishedPulling="2026-01-22 18:33:25.670108422 +0000 UTC m=+7427.153447747" observedRunningTime="2026-01-22 18:33:26.254434921 +0000 UTC m=+7427.737774216" watchObservedRunningTime="2026-01-22 18:33:26.258683866 +0000 UTC m=+7427.742023151" Jan 22 18:33:27 crc kubenswrapper[4758]: I0122 18:33:27.738286 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-6c78f7b546-sv5rx_177272b6-b55b-4e45-9336-d6227af172d0/barbican-api/0.log" Jan 22 18:33:27 crc kubenswrapper[4758]: I0122 18:33:27.918958 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c78f7b546-sv5rx_177272b6-b55b-4e45-9336-d6227af172d0/barbican-api-log/0.log" Jan 22 18:33:27 crc kubenswrapper[4758]: I0122 18:33:27.953130 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5fbd4457db-5gt55_b4115ae1-f42e-40b7-b82a-74d7e4abfa77/barbican-keystone-listener/0.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.044333 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5fbd4457db-5gt55_b4115ae1-f42e-40b7-b82a-74d7e4abfa77/barbican-keystone-listener-log/0.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.163655 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-775569c6d5-2vjq7_925ad838-b20e-48b3-9ee7-08133afb7840/barbican-worker/0.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.188292 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-775569c6d5-2vjq7_925ad838-b20e-48b3-9ee7-08133afb7840/barbican-worker-log/0.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.375297 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bzvws_7b0250c2-eb08-4c81-9d0b-788f1746df63/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.430900 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_93923998-0016-4db9-adff-a433c7a8d57c/ceilometer-central-agent/1.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.560967 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_93923998-0016-4db9-adff-a433c7a8d57c/ceilometer-central-agent/0.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.756238 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_93923998-0016-4db9-adff-a433c7a8d57c/sg-core/0.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.867352 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_93923998-0016-4db9-adff-a433c7a8d57c/ceilometer-notification-agent/0.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.876332 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_93923998-0016-4db9-adff-a433c7a8d57c/proxy-httpd/0.log" Jan 22 18:33:28 crc kubenswrapper[4758]: I0122 18:33:28.905593 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_93923998-0016-4db9-adff-a433c7a8d57c/ceilometer-notification-agent/1.log" Jan 22 18:33:29 crc kubenswrapper[4758]: I0122 18:33:29.258250 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3943daea-3dfe-4c65-ada3-f1c36f9701f8/cinder-api-log/0.log" Jan 22 18:33:29 crc kubenswrapper[4758]: I0122 18:33:29.452544 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9246ea76-1e99-4458-86ef-6ca8d66b6eba/cinder-backup/0.log" Jan 22 18:33:29 crc kubenswrapper[4758]: I0122 18:33:29.542191 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3943daea-3dfe-4c65-ada3-f1c36f9701f8/cinder-api/0.log" Jan 22 18:33:29 crc 
kubenswrapper[4758]: I0122 18:33:29.614565 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9246ea76-1e99-4458-86ef-6ca8d66b6eba/probe/0.log" Jan 22 18:33:29 crc kubenswrapper[4758]: I0122 18:33:29.719234 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4898b260-d20c-4e08-a760-5fa80e700b95/cinder-scheduler/0.log" Jan 22 18:33:30 crc kubenswrapper[4758]: I0122 18:33:30.175830 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4898b260-d20c-4e08-a760-5fa80e700b95/probe/0.log" Jan 22 18:33:30 crc kubenswrapper[4758]: I0122 18:33:30.192930 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_d027f54f-c313-4750-b9ba-18241f322033/probe/0.log" Jan 22 18:33:30 crc kubenswrapper[4758]: I0122 18:33:30.243156 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_d027f54f-c313-4750-b9ba-18241f322033/cinder-volume/0.log" Jan 22 18:33:30 crc kubenswrapper[4758]: I0122 18:33:30.343915 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5/cinder-volume/0.log" Jan 22 18:33:30 crc kubenswrapper[4758]: I0122 18:33:30.957196 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5jzhg_7247ce98-99d8-4a62-87bc-6fb7696602c4/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:31 crc kubenswrapper[4758]: I0122 18:33:31.026712 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_bc682df5-1ce8-4c38-aea1-2c1d3e2f78b5/probe/0.log" Jan 22 18:33:31 crc kubenswrapper[4758]: I0122 18:33:31.511996 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pnlt7_8ad7e035-e1f4-4274-b9c1-9014a86bfb5d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:31 crc kubenswrapper[4758]: I0122 18:33:31.681137 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7884569b4f-9q84h_33e06aca-f569-49e5-8849-8677661defe4/init/0.log" Jan 22 18:33:31 crc kubenswrapper[4758]: I0122 18:33:31.850520 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7884569b4f-9q84h_33e06aca-f569-49e5-8849-8677661defe4/init/0.log" Jan 22 18:33:31 crc kubenswrapper[4758]: I0122 18:33:31.935625 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wst56_d877ce08-9a59-401c-ab3f-fc2c6905507f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:31 crc kubenswrapper[4758]: I0122 18:33:31.976989 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7884569b4f-9q84h_33e06aca-f569-49e5-8849-8677661defe4/dnsmasq-dns/0.log" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.096134 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.097395 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.147794 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_cbbd5d99-3b1f-4e99-b3f9-a8c39af70665/glance-log/0.log" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.180206 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.187654 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbbd5d99-3b1f-4e99-b3f9-a8c39af70665/glance-httpd/0.log" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.386964 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e24622ea-6d08-4bb7-ae62-57d07c5c07aa/glance-httpd/0.log" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.428409 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e24622ea-6d08-4bb7-ae62-57d07c5c07aa/glance-log/0.log" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.461665 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.522259 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nqqvd"] Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.611924 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-55b94d9b56-4x8cx_44cc928c-2531-4055-9b8f-b36957f3485d/horizon/0.log" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.689701 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-smgm5_328e6c99-b23b-4d6d-b816-79d6af92932f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:32 crc kubenswrapper[4758]: I0122 18:33:32.910068 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mpdz7_1a46b6a5-f2c3-49cc-b49e-8fcee32c1b9c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:33 crc kubenswrapper[4758]: I0122 18:33:33.181059 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-55b94d9b56-4x8cx_44cc928c-2531-4055-9b8f-b36957f3485d/horizon-log/0.log" Jan 22 18:33:33 crc kubenswrapper[4758]: I0122 18:33:33.287216 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29485021-kphfm_5d061133-6e47-4b25-951f-01e66858742e/keystone-cron/0.log" Jan 22 18:33:33 crc kubenswrapper[4758]: I0122 18:33:33.368437 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5486585c8c-crbmm_e86c0ccc-4e60-4edc-b8e1-6ba42b49fc1b/keystone-api/0.log" Jan 22 18:33:33 crc kubenswrapper[4758]: I0122 18:33:33.433888 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29485081-78sn6_a0a8915e-da6f-453e-bee3-3ef86673f477/keystone-cron/0.log" Jan 22 18:33:33 crc kubenswrapper[4758]: I0122 18:33:33.643623 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d5a7a812-eaba-4ae7-8d97-e80ae4f70d78/kube-state-metrics/3.log" Jan 22 18:33:33 crc kubenswrapper[4758]: I0122 18:33:33.646497 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d5a7a812-eaba-4ae7-8d97-e80ae4f70d78/kube-state-metrics/2.log" Jan 22 18:33:33 crc kubenswrapper[4758]: I0122 18:33:33.789168 4758 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vlm88_23a50ad6-72f6-49e1-b41f-7ab16b033783/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:34 crc kubenswrapper[4758]: I0122 18:33:34.113901 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-579gl_0a76cd73-4259-4fa1-8846-f645ef6603b1/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:34 crc kubenswrapper[4758]: I0122 18:33:34.175348 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-877b57c45-cs9rd_1d80f9d2-e7aa-4cc3-876f-0ecd9915704d/neutron-httpd/0.log" Jan 22 18:33:34 crc kubenswrapper[4758]: I0122 18:33:34.181532 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-877b57c45-cs9rd_1d80f9d2-e7aa-4cc3-876f-0ecd9915704d/neutron-api/0.log" Jan 22 18:33:34 crc kubenswrapper[4758]: I0122 18:33:34.429668 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nqqvd" podUID="52f21b16-b14c-457a-9598-e53499f22ad2" containerName="registry-server" containerID="cri-o://e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a" gracePeriod=2 Jan 22 18:33:34 crc kubenswrapper[4758]: I0122 18:33:34.946291 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_20c9fbe2-1c90-4beb-9154-094e3fdc87d1/nova-cell0-conductor-conductor/0.log" Jan 22 18:33:34 crc kubenswrapper[4758]: I0122 18:33:34.980775 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.061315 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bsnw\" (UniqueName: \"kubernetes.io/projected/52f21b16-b14c-457a-9598-e53499f22ad2-kube-api-access-8bsnw\") pod \"52f21b16-b14c-457a-9598-e53499f22ad2\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.061457 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-catalog-content\") pod \"52f21b16-b14c-457a-9598-e53499f22ad2\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.061635 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-utilities\") pod \"52f21b16-b14c-457a-9598-e53499f22ad2\" (UID: \"52f21b16-b14c-457a-9598-e53499f22ad2\") " Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.063221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-utilities" (OuterVolumeSpecName: "utilities") pod "52f21b16-b14c-457a-9598-e53499f22ad2" (UID: "52f21b16-b14c-457a-9598-e53499f22ad2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.090682 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f21b16-b14c-457a-9598-e53499f22ad2-kube-api-access-8bsnw" (OuterVolumeSpecName: "kube-api-access-8bsnw") pod "52f21b16-b14c-457a-9598-e53499f22ad2" (UID: "52f21b16-b14c-457a-9598-e53499f22ad2"). InnerVolumeSpecName "kube-api-access-8bsnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.106920 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52f21b16-b14c-457a-9598-e53499f22ad2" (UID: "52f21b16-b14c-457a-9598-e53499f22ad2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.163938 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.163965 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bsnw\" (UniqueName: \"kubernetes.io/projected/52f21b16-b14c-457a-9598-e53499f22ad2-kube-api-access-8bsnw\") on node \"crc\" DevicePath \"\"" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.163978 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f21b16-b14c-457a-9598-e53499f22ad2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.226501 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_56eabbf1-f0f4-4d6d-8839-47dee8e04278/nova-cell1-conductor-conductor/0.log" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.356761 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_946719d1-252a-449e-9b4e-5ae6639fd635/nova-api-log/0.log" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.444001 4758 generic.go:334] "Generic (PLEG): container finished" podID="52f21b16-b14c-457a-9598-e53499f22ad2" containerID="e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a" exitCode=0 Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.444046 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqvd" event={"ID":"52f21b16-b14c-457a-9598-e53499f22ad2","Type":"ContainerDied","Data":"e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a"} Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.444078 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqqvd" event={"ID":"52f21b16-b14c-457a-9598-e53499f22ad2","Type":"ContainerDied","Data":"1d28cedd7b4cdd20ed7cf102d7ebe6674367d0450033213d624b7b0dfb359dbd"} Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.444118 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nqqvd" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.444138 4758 scope.go:117] "RemoveContainer" containerID="e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.475034 4758 scope.go:117] "RemoveContainer" containerID="8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.491822 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nqqvd"] Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.500448 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nqqvd"] Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.502390 4758 scope.go:117] "RemoveContainer" containerID="6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.562413 4758 scope.go:117] "RemoveContainer" containerID="e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a" Jan 22 18:33:35 crc kubenswrapper[4758]: E0122 18:33:35.563295 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a\": container with ID starting with e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a not found: ID does not exist" containerID="e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.563352 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a"} err="failed to get container status \"e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a\": rpc error: code = NotFound desc = could not find container \"e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a\": container with ID starting with e691a3c2807ce4e3316c8da6ce19c94f389cae400ad2f2c333962fc915b31b7a not found: ID does not exist" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.563391 4758 scope.go:117] "RemoveContainer" containerID="8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3" Jan 22 18:33:35 crc kubenswrapper[4758]: E0122 18:33:35.563946 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3\": container with ID starting with 8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3 not found: ID does not exist" containerID="8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.563988 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3"} err="failed to get container status \"8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3\": rpc error: code = NotFound desc = could not find container \"8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3\": container with ID starting with 8773f8dbae8a5a98370eaf207752e992102fafc8ff9f1f6173a952ea872477c3 not found: ID does not exist" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.564022 4758 scope.go:117] "RemoveContainer" 
containerID="6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc" Jan 22 18:33:35 crc kubenswrapper[4758]: E0122 18:33:35.564493 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc\": container with ID starting with 6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc not found: ID does not exist" containerID="6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.564532 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc"} err="failed to get container status \"6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc\": rpc error: code = NotFound desc = could not find container \"6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc\": container with ID starting with 6280e680c989a402befad4c760c62bdd801224661f6cbbb53818b73cc7bcc2cc not found: ID does not exist" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.667830 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-7j728_7cbdeacc-f53e-43de-9068-513ac27f1487/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:35 crc kubenswrapper[4758]: I0122 18:33:35.669185 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6d192e57-5d00-4cbb-a380-db73a28f70f1/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 18:33:36 crc kubenswrapper[4758]: I0122 18:33:36.067332 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_946719d1-252a-449e-9b4e-5ae6639fd635/nova-api-api/0.log" Jan 22 18:33:36 crc kubenswrapper[4758]: I0122 18:33:36.086153 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6052fa46-8362-4abe-8577-5e47c36af2c1/nova-metadata-log/0.log" Jan 22 18:33:36 crc kubenswrapper[4758]: I0122 18:33:36.288193 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf/mysql-bootstrap/0.log" Jan 22 18:33:36 crc kubenswrapper[4758]: I0122 18:33:36.524790 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_40fd7db8-beee-4742-bc51-2234f6b22e17/nova-scheduler-scheduler/0.log" Jan 22 18:33:36 crc kubenswrapper[4758]: I0122 18:33:36.569281 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf/mysql-bootstrap/0.log" Jan 22 18:33:36 crc kubenswrapper[4758]: I0122 18:33:36.678733 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3ae20e0d-61fb-44b1-8176-ed7ecb6bf1cf/galera/0.log" Jan 22 18:33:36 crc kubenswrapper[4758]: I0122 18:33:36.794861 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f52e2571-4001-441f-b7b7-b4746ae1c10d/mysql-bootstrap/0.log" Jan 22 18:33:36 crc kubenswrapper[4758]: I0122 18:33:36.822574 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f21b16-b14c-457a-9598-e53499f22ad2" path="/var/lib/kubelet/pods/52f21b16-b14c-457a-9598-e53499f22ad2/volumes" Jan 22 18:33:36 crc kubenswrapper[4758]: I0122 18:33:36.929767 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_f52e2571-4001-441f-b7b7-b4746ae1c10d/mysql-bootstrap/0.log" Jan 22 18:33:37 crc kubenswrapper[4758]: I0122 18:33:37.023865 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f52e2571-4001-441f-b7b7-b4746ae1c10d/galera/0.log" Jan 22 18:33:37 crc kubenswrapper[4758]: I0122 18:33:37.141069 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f05be9d3-0051-48ce-9100-e436b5f14762/openstackclient/0.log" Jan 22 18:33:37 crc kubenswrapper[4758]: I0122 18:33:37.229913 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-pbmk8_15cc31e0-f0b3-4f0f-aaf2-af71e3c34aff/openstack-network-exporter/0.log" Jan 22 18:33:37 crc kubenswrapper[4758]: I0122 18:33:37.440141 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mpsgq_7911c0f6-531a-403c-861f-f9cd3ec18ce4/ovn-controller/0.log" Jan 22 18:33:37 crc kubenswrapper[4758]: I0122 18:33:37.563429 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sx98_ca3428d6-c5a4-4c73-897f-7a03fa7c8463/ovsdb-server-init/0.log" Jan 22 18:33:37 crc kubenswrapper[4758]: I0122 18:33:37.828180 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sx98_ca3428d6-c5a4-4c73-897f-7a03fa7c8463/ovsdb-server-init/0.log" Jan 22 18:33:37 crc kubenswrapper[4758]: I0122 18:33:37.856826 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sx98_ca3428d6-c5a4-4c73-897f-7a03fa7c8463/ovsdb-server/0.log" Jan 22 18:33:38 crc kubenswrapper[4758]: I0122 18:33:38.062577 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6sx98_ca3428d6-c5a4-4c73-897f-7a03fa7c8463/ovs-vswitchd/0.log" Jan 22 18:33:38 crc kubenswrapper[4758]: I0122 18:33:38.088401 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-t69z2_3b6debcd-ee7f-4791-90eb-36e13e82f542/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:38 crc kubenswrapper[4758]: I0122 18:33:38.296632 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5335ec54-1c39-41ba-9788-672cde3d164c/openstack-network-exporter/0.log" Jan 22 18:33:38 crc kubenswrapper[4758]: I0122 18:33:38.369408 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5335ec54-1c39-41ba-9788-672cde3d164c/ovn-northd/0.log" Jan 22 18:33:38 crc kubenswrapper[4758]: I0122 18:33:38.549301 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aa00a9b2-102b-4b46-b69f-86efda64b178/openstack-network-exporter/0.log" Jan 22 18:33:38 crc kubenswrapper[4758]: I0122 18:33:38.586270 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aa00a9b2-102b-4b46-b69f-86efda64b178/ovsdbserver-nb/0.log" Jan 22 18:33:38 crc kubenswrapper[4758]: I0122 18:33:38.757384 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fad5367d-b78c-4015-ac3a-4db4e3d3012a/openstack-network-exporter/0.log" Jan 22 18:33:38 crc kubenswrapper[4758]: I0122 18:33:38.780357 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fad5367d-b78c-4015-ac3a-4db4e3d3012a/ovsdbserver-sb/0.log" Jan 22 18:33:39 crc kubenswrapper[4758]: I0122 18:33:39.230569 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_6052fa46-8362-4abe-8577-5e47c36af2c1/nova-metadata-metadata/0.log" Jan 22 18:33:39 crc kubenswrapper[4758]: I0122 18:33:39.244285 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6cd69747bd-jv5rb_e48d0711-47a0-4fe2-8341-7c4fc97e58b0/placement-api/0.log" Jan 22 18:33:39 crc kubenswrapper[4758]: I0122 18:33:39.292290 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6cd69747bd-jv5rb_e48d0711-47a0-4fe2-8341-7c4fc97e58b0/placement-log/0.log" Jan 22 18:33:39 crc kubenswrapper[4758]: I0122 18:33:39.440661 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_743945d0-7488-4665-beaf-f2026e10a424/init-config-reloader/0.log" Jan 22 18:33:39 crc kubenswrapper[4758]: I0122 18:33:39.651444 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_743945d0-7488-4665-beaf-f2026e10a424/init-config-reloader/0.log" Jan 22 18:33:39 crc kubenswrapper[4758]: I0122 18:33:39.659410 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_743945d0-7488-4665-beaf-f2026e10a424/config-reloader/0.log" Jan 22 18:33:39 crc kubenswrapper[4758]: I0122 18:33:39.688947 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_743945d0-7488-4665-beaf-f2026e10a424/thanos-sidecar/0.log" Jan 22 18:33:39 crc kubenswrapper[4758]: I0122 18:33:39.703810 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_743945d0-7488-4665-beaf-f2026e10a424/prometheus/0.log" Jan 22 18:33:39 crc kubenswrapper[4758]: I0122 18:33:39.844437 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_11ff72c7-325b-4836-8d06-dce1d2e8ea26/setup-container/0.log" Jan 22 18:33:40 crc kubenswrapper[4758]: I0122 18:33:40.181528 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_11ff72c7-325b-4836-8d06-dce1d2e8ea26/setup-container/0.log" Jan 22 18:33:40 crc kubenswrapper[4758]: I0122 18:33:40.193860 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_11ff72c7-325b-4836-8d06-dce1d2e8ea26/rabbitmq/0.log" Jan 22 18:33:40 crc kubenswrapper[4758]: I0122 18:33:40.205834 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_be871bb7-c028-4788-9769-51685b7290ea/setup-container/0.log" Jan 22 18:33:40 crc kubenswrapper[4758]: I0122 18:33:40.458489 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_be871bb7-c028-4788-9769-51685b7290ea/setup-container/0.log" Jan 22 18:33:40 crc kubenswrapper[4758]: I0122 18:33:40.486861 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_401b6249-7451-4767-9363-89295d6224f8/setup-container/0.log" Jan 22 18:33:40 crc kubenswrapper[4758]: I0122 18:33:40.487659 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_be871bb7-c028-4788-9769-51685b7290ea/rabbitmq/0.log" Jan 22 18:33:40 crc kubenswrapper[4758]: I0122 18:33:40.740339 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_401b6249-7451-4767-9363-89295d6224f8/setup-container/0.log" Jan 22 18:33:40 crc kubenswrapper[4758]: I0122 18:33:40.754707 4758 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-server-0_401b6249-7451-4767-9363-89295d6224f8/rabbitmq/0.log" Jan 22 18:33:40 crc kubenswrapper[4758]: I0122 18:33:40.831048 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wrcrb_01e9a9ff-8646-410e-81d5-f8757e1089bc/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:41 crc kubenswrapper[4758]: I0122 18:33:41.020595 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7knhk_37ddbe64-608a-4aac-9d84-f18a622cf3f4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:41 crc kubenswrapper[4758]: I0122 18:33:41.053716 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7xb4q_4d38f3f0-3531-4733-8548-950b770f2094/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:41 crc kubenswrapper[4758]: I0122 18:33:41.256264 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-tm4f5_84e11d12-3496-4358-9062-7cd076d2de7c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:41 crc kubenswrapper[4758]: I0122 18:33:41.334598 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-h2nj4_b4ba22a1-71a4-433b-a32f-c73302d187de/ssh-known-hosts-edpm-deployment/0.log" Jan 22 18:33:41 crc kubenswrapper[4758]: I0122 18:33:41.588945 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5fb5ff74dc-qd4wf_8c43412a-0632-40d3-918a-e8a601754dcd/proxy-server/0.log" Jan 22 18:33:41 crc kubenswrapper[4758]: I0122 18:33:41.759915 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-q78gl_3df63c93-1525-4b38-92e3-4d9b15a5c293/swift-ring-rebalance/0.log" Jan 22 18:33:41 crc kubenswrapper[4758]: I0122 18:33:41.847495 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5fb5ff74dc-qd4wf_8c43412a-0632-40d3-918a-e8a601754dcd/proxy-httpd/0.log" Jan 22 18:33:41 crc kubenswrapper[4758]: I0122 18:33:41.887375 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/account-auditor/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.000330 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/account-reaper/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.115255 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/account-server/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.182091 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/account-replicator/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.197546 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/container-auditor/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.327981 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/container-server/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.349815 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/container-replicator/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.422491 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/container-updater/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.456446 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/object-auditor/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.551285 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/object-expirer/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.642118 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/object-replicator/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.663059 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/object-server/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.665244 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/object-updater/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.775127 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/rsync/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.894889 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c63f01b2-8785-4108-b532-b69bc2407a26/swift-recon-cron/0.log" Jan 22 18:33:42 crc kubenswrapper[4758]: I0122 18:33:42.973864 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-ddjd9_e8778204-17cb-497b-a3d2-4d5f7709924d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:43 crc kubenswrapper[4758]: I0122 18:33:43.215387 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lm46c_de60144d-7668-4bcf-8421-dc4b0ceedf26/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 18:33:43 crc kubenswrapper[4758]: I0122 18:33:43.306884 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0a5885aa-206d-4176-bc4b-2967b7391af9/tempest-tests-tempest-tests-runner/0.log" Jan 22 18:33:43 crc kubenswrapper[4758]: I0122 18:33:43.837359 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 18:33:43 crc kubenswrapper[4758]: I0122 18:33:43.837458 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 18:33:43 crc kubenswrapper[4758]: I0122 18:33:43.837526 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" Jan 22 18:33:43 crc kubenswrapper[4758]: I0122 18:33:43.841186 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 18:33:43 crc kubenswrapper[4758]: I0122 18:33:43.842064 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" gracePeriod=600 Jan 22 18:33:43 crc kubenswrapper[4758]: I0122 18:33:43.960399 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_400d3b29-16ae-4eeb-a00d-716c210a1947/watcher-applier/0.log" Jan 22 18:33:43 crc kubenswrapper[4758]: E0122 18:33:43.970288 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:33:44 crc kubenswrapper[4758]: I0122 18:33:44.540600 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99"} Jan 22 18:33:44 crc kubenswrapper[4758]: I0122 18:33:44.540506 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" exitCode=0 Jan 22 18:33:44 crc kubenswrapper[4758]: I0122 18:33:44.541050 4758 scope.go:117] "RemoveContainer" containerID="2a6a8e642e4ee60ebde8d328db1c83e15009314791b4ff4fb0767d4d7274d9c0" Jan 22 18:33:44 crc kubenswrapper[4758]: I0122 18:33:44.541927 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:33:44 crc kubenswrapper[4758]: E0122 18:33:44.542265 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:33:44 crc kubenswrapper[4758]: I0122 18:33:44.768336 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_817e9de5-ef65-4caf-b47e-1cd6dd125daf/watcher-api-log/0.log" Jan 22 18:33:46 crc kubenswrapper[4758]: I0122 18:33:46.683186 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_4917bff0-0c03-454c-b1db-416fe2caaf7f/watcher-decision-engine/0.log" Jan 22 18:33:49 crc kubenswrapper[4758]: I0122 18:33:49.796694 4758 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_817e9de5-ef65-4caf-b47e-1cd6dd125daf/watcher-api/0.log" Jan 22 18:33:59 crc kubenswrapper[4758]: I0122 18:33:59.808602 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:33:59 crc kubenswrapper[4758]: E0122 18:33:59.809395 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:34:03 crc kubenswrapper[4758]: I0122 18:34:03.943674 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7bab3882-8d1f-43dd-bbd6-53fc702f137d/memcached/0.log" Jan 22 18:34:13 crc kubenswrapper[4758]: I0122 18:34:13.808683 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:34:13 crc kubenswrapper[4758]: E0122 18:34:13.809496 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:34:17 crc kubenswrapper[4758]: I0122 18:34:17.798802 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-s8q8p_c3e0f5c7-10cb-441c-9516-f6de8fe29757/manager/1.log" Jan 22 18:34:17 crc kubenswrapper[4758]: I0122 18:34:17.801558 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-s8q8p_c3e0f5c7-10cb-441c-9516-f6de8fe29757/manager/0.log" Jan 22 18:34:18 crc kubenswrapper[4758]: I0122 18:34:18.038043 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-tlt96_e7fdd2cd-e517-46b5-acb3-22b59b7f132f/manager/1.log" Jan 22 18:34:18 crc kubenswrapper[4758]: I0122 18:34:18.041707 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-tlt96_e7fdd2cd-e517-46b5-acb3-22b59b7f132f/manager/0.log" Jan 22 18:34:18 crc kubenswrapper[4758]: I0122 18:34:18.190218 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-2mr2s_901f347a-3b10-4392-8247-41a859112544/manager/1.log" Jan 22 18:34:18 crc kubenswrapper[4758]: I0122 18:34:18.394337 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-2mr2s_901f347a-3b10-4392-8247-41a859112544/manager/0.log" Jan 22 18:34:18 crc kubenswrapper[4758]: I0122 18:34:18.488559 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j_4b41ab64-3525-4cfb-a7b6-1d3a59959fd2/util/0.log" Jan 22 18:34:18 crc kubenswrapper[4758]: I0122 18:34:18.690901 4758 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j_4b41ab64-3525-4cfb-a7b6-1d3a59959fd2/pull/0.log" Jan 22 18:34:18 crc kubenswrapper[4758]: I0122 18:34:18.755573 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j_4b41ab64-3525-4cfb-a7b6-1d3a59959fd2/util/0.log" Jan 22 18:34:18 crc kubenswrapper[4758]: I0122 18:34:18.819718 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j_4b41ab64-3525-4cfb-a7b6-1d3a59959fd2/pull/0.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.038028 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j_4b41ab64-3525-4cfb-a7b6-1d3a59959fd2/util/0.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.056527 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j_4b41ab64-3525-4cfb-a7b6-1d3a59959fd2/pull/0.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.072144 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26ckcb6j_4b41ab64-3525-4cfb-a7b6-1d3a59959fd2/extract/0.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.299034 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-skwtp_fa976a5e-7cd9-402f-9792-015ca1488d1f/manager/1.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.313936 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-skwtp_fa976a5e-7cd9-402f-9792-015ca1488d1f/manager/2.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.363940 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-2fkhp_659f7d3e-5518-4d19-bb54-e39295a667d2/manager/2.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.529578 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-2fkhp_659f7d3e-5518-4d19-bb54-e39295a667d2/manager/1.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.578095 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-zkfzz_25848d11-6830-45f8-aff0-0082594b5f3f/manager/1.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.610275 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-zkfzz_25848d11-6830-45f8-aff0-0082594b5f3f/manager/0.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.808803 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-sb974_35a3fafd-45ea-465d-90ef-36148a60685e/manager/2.log" Jan 22 18:34:19 crc kubenswrapper[4758]: I0122 18:34:19.846072 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-sb974_35a3fafd-45ea-465d-90ef-36148a60685e/manager/1.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.028523 4758 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-gd568_e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7/manager/1.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.045303 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-gd568_e8d5a5c6-b15b-490d-aab9-7fc63e9f30f7/manager/2.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.131755 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-dfb5n_78689fee-3fe7-47d2-866d-6465d23378ea/manager/1.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.309981 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-2qp8f_5ade5af9-f79e-4285-841c-0f08e88cca47/manager/1.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.333931 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-dfb5n_78689fee-3fe7-47d2-866d-6465d23378ea/manager/0.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.342634 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-2qp8f_5ade5af9-f79e-4285-841c-0f08e88cca47/manager/0.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.527373 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-d2nmz_d67bb459-81fe-48a2-ac8a-cb4441bb35bb/manager/2.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.540312 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-d2nmz_d67bb459-81fe-48a2-ac8a-cb4441bb35bb/manager/1.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.730560 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-7tzm4_c73a71b4-f1fd-4a6c-9832-ce9b48a5f220/manager/2.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.816899 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-7tzm4_c73a71b4-f1fd-4a6c-9832-ce9b48a5f220/manager/1.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.936796 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-zfcl5_7d2439ad-1ca6-4c24-9d15-e04f0f89aedf/manager/2.log" Jan 22 18:34:20 crc kubenswrapper[4758]: I0122 18:34:20.984428 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-zfcl5_7d2439ad-1ca6-4c24-9d15-e04f0f89aedf/manager/1.log" Jan 22 18:34:21 crc kubenswrapper[4758]: I0122 18:34:21.075841 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-jr994_16d19f40-45e9-4f1a-b953-e5c68ca014f3/manager/2.log" Jan 22 18:34:21 crc kubenswrapper[4758]: I0122 18:34:21.184321 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-jr994_16d19f40-45e9-4f1a-b953-e5c68ca014f3/manager/1.log" Jan 22 18:34:21 crc kubenswrapper[4758]: I0122 18:34:21.308391 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d_cdd1962b-fbf0-480c-b5e2-e28ee6988046/manager/1.log" Jan 22 18:34:21 crc kubenswrapper[4758]: I0122 18:34:21.334588 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854wxd6d_cdd1962b-fbf0-480c-b5e2-e28ee6988046/manager/0.log" Jan 22 18:34:21 crc kubenswrapper[4758]: I0122 18:34:21.546267 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-b7565899b-vlqs7_4801e5d3-a66d-4856-bfc2-95dfebf6f442/operator/1.log" Jan 22 18:34:21 crc kubenswrapper[4758]: I0122 18:34:21.802243 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-675f79667-vjvtj_c4847ca7-5057-4d6d-80c5-f74c7d633e83/manager/1.log" Jan 22 18:34:21 crc kubenswrapper[4758]: I0122 18:34:21.868331 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-b7565899b-vlqs7_4801e5d3-a66d-4856-bfc2-95dfebf6f442/operator/0.log" Jan 22 18:34:22 crc kubenswrapper[4758]: I0122 18:34:22.087430 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gvt49_c721cd63-b13a-43f8-a903-f8a996d9c478/registry-server/0.log" Jan 22 18:34:22 crc kubenswrapper[4758]: I0122 18:34:22.259308 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lb8mx_f5135718-a42b-4089-922b-9fba781fe6db/manager/2.log" Jan 22 18:34:22 crc kubenswrapper[4758]: I0122 18:34:22.344903 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lb8mx_f5135718-a42b-4089-922b-9fba781fe6db/manager/1.log" Jan 22 18:34:22 crc kubenswrapper[4758]: I0122 18:34:22.345725 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-4jthc_19b4b900-d90f-4e59-b082-61f058f5882b/manager/2.log" Jan 22 18:34:22 crc kubenswrapper[4758]: I0122 18:34:22.507643 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-4jthc_19b4b900-d90f-4e59-b082-61f058f5882b/manager/1.log" Jan 22 18:34:22 crc kubenswrapper[4758]: I0122 18:34:22.567601 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-cb5t8_26d5529a-b270-40fc-9faa-037435dd2f80/operator/2.log" Jan 22 18:34:22 crc kubenswrapper[4758]: I0122 18:34:22.599987 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-cb5t8_26d5529a-b270-40fc-9faa-037435dd2f80/operator/1.log" Jan 22 18:34:22 crc kubenswrapper[4758]: I0122 18:34:22.831414 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4rlkk_40845474-36a2-48c0-a0df-af5deb2a31fd/manager/2.log" Jan 22 18:34:22 crc kubenswrapper[4758]: I0122 18:34:22.842824 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4rlkk_40845474-36a2-48c0-a0df-af5deb2a31fd/manager/1.log" Jan 22 18:34:23 crc kubenswrapper[4758]: I0122 18:34:23.099821 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-59n7w_d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13/manager/1.log" Jan 22 18:34:23 crc kubenswrapper[4758]: I0122 18:34:23.116556 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-675f79667-vjvtj_c4847ca7-5057-4d6d-80c5-f74c7d633e83/manager/0.log" Jan 22 18:34:23 crc kubenswrapper[4758]: I0122 18:34:23.196937 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-2xj52_644142ed-c937-406d-9fd5-3fe856879a92/manager/1.log" Jan 22 18:34:23 crc kubenswrapper[4758]: I0122 18:34:23.319379 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-59n7w_d4c5d14c-33e9-4cb0-9ff4-9747c2cd3c13/manager/0.log" Jan 22 18:34:23 crc kubenswrapper[4758]: I0122 18:34:23.366562 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-2xj52_644142ed-c937-406d-9fd5-3fe856879a92/manager/0.log" Jan 22 18:34:23 crc kubenswrapper[4758]: I0122 18:34:23.471265 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-85b8fd6746-9vvd6_71c16ac1-3276-4096-93c5-d10765320713/manager/1.log" Jan 22 18:34:23 crc kubenswrapper[4758]: I0122 18:34:23.478608 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-85b8fd6746-9vvd6_71c16ac1-3276-4096-93c5-d10765320713/manager/2.log" Jan 22 18:34:25 crc kubenswrapper[4758]: I0122 18:34:25.808586 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:34:25 crc kubenswrapper[4758]: E0122 18:34:25.809259 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:34:37 crc kubenswrapper[4758]: I0122 18:34:37.809534 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:34:37 crc kubenswrapper[4758]: E0122 18:34:37.810894 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:34:44 crc kubenswrapper[4758]: I0122 18:34:44.179584 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cvjnm_06a279e1-00f2-4ae0-9bc4-6481c53c14f1/control-plane-machine-set-operator/0.log" Jan 22 18:34:44 crc kubenswrapper[4758]: I0122 18:34:44.406435 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2k2wj_e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7/kube-rbac-proxy/0.log" Jan 22 18:34:44 crc 
kubenswrapper[4758]: I0122 18:34:44.421513 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2k2wj_e82bca83-9360-4ff6-b0d8-dcaeb20cdcf7/machine-api-operator/0.log" Jan 22 18:34:51 crc kubenswrapper[4758]: I0122 18:34:51.808951 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:34:51 crc kubenswrapper[4758]: E0122 18:34:51.809952 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:34:57 crc kubenswrapper[4758]: I0122 18:34:57.915983 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-bpw4j_36cf0be1-e796-4c9e-b232-2a0c0ceaaa79/cert-manager-controller/1.log" Jan 22 18:34:58 crc kubenswrapper[4758]: I0122 18:34:58.002309 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-bpw4j_36cf0be1-e796-4c9e-b232-2a0c0ceaaa79/cert-manager-controller/0.log" Jan 22 18:34:58 crc kubenswrapper[4758]: I0122 18:34:58.142327 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qg57g_86017532-da20-4917-8f8b-34190218edc2/cert-manager-cainjector/2.log" Jan 22 18:34:58 crc kubenswrapper[4758]: I0122 18:34:58.184494 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qg57g_86017532-da20-4917-8f8b-34190218edc2/cert-manager-cainjector/1.log" Jan 22 18:34:58 crc kubenswrapper[4758]: I0122 18:34:58.315222 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-hcn6c_9844066a-3c0e-4de2-b9d5-f6523e724066/cert-manager-webhook/0.log" Jan 22 18:35:05 crc kubenswrapper[4758]: I0122 18:35:05.808952 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:35:05 crc kubenswrapper[4758]: E0122 18:35:05.809917 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:35:11 crc kubenswrapper[4758]: I0122 18:35:11.145557 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-vf6r8_851f106a-fb00-4a5d-9112-d188f5bf363d/nmstate-console-plugin/0.log" Jan 22 18:35:11 crc kubenswrapper[4758]: I0122 18:35:11.311931 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bxw2x_9371e907-70ad-4d4e-85ed-42d886f3a58c/nmstate-handler/0.log" Jan 22 18:35:11 crc kubenswrapper[4758]: I0122 18:35:11.333039 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-zqjtk_2af60d67-9e48-435e-a5a5-3786c6e44da3/kube-rbac-proxy/0.log" Jan 22 18:35:11 crc kubenswrapper[4758]: I0122 
18:35:11.422201 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-zqjtk_2af60d67-9e48-435e-a5a5-3786c6e44da3/nmstate-metrics/0.log" Jan 22 18:35:11 crc kubenswrapper[4758]: I0122 18:35:11.487674 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-xbrd4_6f530d4b-935a-43a2-91a1-d3e786e42edd/nmstate-operator/0.log" Jan 22 18:35:11 crc kubenswrapper[4758]: I0122 18:35:11.609509 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-6tvr2_ad84bac3-9a0e-40d9-a603-7d8503a45b32/nmstate-webhook/0.log" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.361214 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mhkmp"] Jan 22 18:35:12 crc kubenswrapper[4758]: E0122 18:35:12.361841 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f21b16-b14c-457a-9598-e53499f22ad2" containerName="extract-utilities" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.362183 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f21b16-b14c-457a-9598-e53499f22ad2" containerName="extract-utilities" Jan 22 18:35:12 crc kubenswrapper[4758]: E0122 18:35:12.362239 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f21b16-b14c-457a-9598-e53499f22ad2" containerName="extract-content" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.362250 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f21b16-b14c-457a-9598-e53499f22ad2" containerName="extract-content" Jan 22 18:35:12 crc kubenswrapper[4758]: E0122 18:35:12.362268 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f21b16-b14c-457a-9598-e53499f22ad2" containerName="registry-server" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.362277 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f21b16-b14c-457a-9598-e53499f22ad2" containerName="registry-server" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.362610 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f21b16-b14c-457a-9598-e53499f22ad2" containerName="registry-server" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.365079 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.380799 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhkmp"] Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.505377 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9w5k\" (UniqueName: \"kubernetes.io/projected/6ba6d715-1eb9-4e19-8d87-340f046aebc3-kube-api-access-w9w5k\") pod \"redhat-marketplace-mhkmp\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.505795 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-utilities\") pod \"redhat-marketplace-mhkmp\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.505831 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-catalog-content\") pod \"redhat-marketplace-mhkmp\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.608910 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-utilities\") pod \"redhat-marketplace-mhkmp\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.608966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-catalog-content\") pod \"redhat-marketplace-mhkmp\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.609097 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9w5k\" (UniqueName: \"kubernetes.io/projected/6ba6d715-1eb9-4e19-8d87-340f046aebc3-kube-api-access-w9w5k\") pod \"redhat-marketplace-mhkmp\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.609549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-utilities\") pod \"redhat-marketplace-mhkmp\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.609690 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-catalog-content\") pod \"redhat-marketplace-mhkmp\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.641667 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w9w5k\" (UniqueName: \"kubernetes.io/projected/6ba6d715-1eb9-4e19-8d87-340f046aebc3-kube-api-access-w9w5k\") pod \"redhat-marketplace-mhkmp\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:12 crc kubenswrapper[4758]: I0122 18:35:12.691289 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:13 crc kubenswrapper[4758]: I0122 18:35:13.203401 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhkmp"] Jan 22 18:35:13 crc kubenswrapper[4758]: I0122 18:35:13.597477 4758 generic.go:334] "Generic (PLEG): container finished" podID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerID="05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc" exitCode=0 Jan 22 18:35:13 crc kubenswrapper[4758]: I0122 18:35:13.597807 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhkmp" event={"ID":"6ba6d715-1eb9-4e19-8d87-340f046aebc3","Type":"ContainerDied","Data":"05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc"} Jan 22 18:35:13 crc kubenswrapper[4758]: I0122 18:35:13.597840 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhkmp" event={"ID":"6ba6d715-1eb9-4e19-8d87-340f046aebc3","Type":"ContainerStarted","Data":"4cec0f7f69d1e3e28e06f7a8c59bef855fb0d5c6c401fa78bf1365c733c7850d"} Jan 22 18:35:14 crc kubenswrapper[4758]: I0122 18:35:14.608862 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhkmp" event={"ID":"6ba6d715-1eb9-4e19-8d87-340f046aebc3","Type":"ContainerStarted","Data":"263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d"} Jan 22 18:35:15 crc kubenswrapper[4758]: I0122 18:35:15.619473 4758 generic.go:334] "Generic (PLEG): container finished" podID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerID="263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d" exitCode=0 Jan 22 18:35:15 crc kubenswrapper[4758]: I0122 18:35:15.619553 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhkmp" event={"ID":"6ba6d715-1eb9-4e19-8d87-340f046aebc3","Type":"ContainerDied","Data":"263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d"} Jan 22 18:35:16 crc kubenswrapper[4758]: I0122 18:35:16.811423 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:35:16 crc kubenswrapper[4758]: E0122 18:35:16.811793 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:35:17 crc kubenswrapper[4758]: I0122 18:35:17.639677 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhkmp" event={"ID":"6ba6d715-1eb9-4e19-8d87-340f046aebc3","Type":"ContainerStarted","Data":"54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2"} Jan 22 18:35:17 crc kubenswrapper[4758]: I0122 18:35:17.671828 4758 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-mhkmp" podStartSLOduration=2.126306644 podStartE2EDuration="5.671773062s" podCreationTimestamp="2026-01-22 18:35:12 +0000 UTC" firstStartedPulling="2026-01-22 18:35:13.599426034 +0000 UTC m=+7535.082765319" lastFinishedPulling="2026-01-22 18:35:17.144892442 +0000 UTC m=+7538.628231737" observedRunningTime="2026-01-22 18:35:17.66065229 +0000 UTC m=+7539.143991575" watchObservedRunningTime="2026-01-22 18:35:17.671773062 +0000 UTC m=+7539.155112347" Jan 22 18:35:22 crc kubenswrapper[4758]: I0122 18:35:22.691955 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:22 crc kubenswrapper[4758]: I0122 18:35:22.692623 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:22 crc kubenswrapper[4758]: I0122 18:35:22.753804 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:23 crc kubenswrapper[4758]: I0122 18:35:23.743174 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:23 crc kubenswrapper[4758]: I0122 18:35:23.808694 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhkmp"] Jan 22 18:35:25 crc kubenswrapper[4758]: I0122 18:35:25.712589 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mhkmp" podUID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerName="registry-server" containerID="cri-o://54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2" gracePeriod=2 Jan 22 18:35:25 crc kubenswrapper[4758]: I0122 18:35:25.841529 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-54jp6_fdd4969c-d2b9-45fa-b5b2-da97462c0122/prometheus-operator/0.log" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.144098 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-647895bbd9-4wr75_e73c81be-8209-43d3-9756-49c2157dde87/prometheus-operator-admission-webhook/0.log" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.150839 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr_ce26f110-8bb8-42b0-82cc-a001c2c1ebaf/prometheus-operator-admission-webhook/0.log" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.238188 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.302615 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-catalog-content\") pod \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.302734 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-utilities\") pod \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.302976 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9w5k\" (UniqueName: \"kubernetes.io/projected/6ba6d715-1eb9-4e19-8d87-340f046aebc3-kube-api-access-w9w5k\") pod \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\" (UID: \"6ba6d715-1eb9-4e19-8d87-340f046aebc3\") " Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.307587 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-utilities" (OuterVolumeSpecName: "utilities") pod "6ba6d715-1eb9-4e19-8d87-340f046aebc3" (UID: "6ba6d715-1eb9-4e19-8d87-340f046aebc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.312991 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba6d715-1eb9-4e19-8d87-340f046aebc3-kube-api-access-w9w5k" (OuterVolumeSpecName: "kube-api-access-w9w5k") pod "6ba6d715-1eb9-4e19-8d87-340f046aebc3" (UID: "6ba6d715-1eb9-4e19-8d87-340f046aebc3"). InnerVolumeSpecName "kube-api-access-w9w5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.404771 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ba6d715-1eb9-4e19-8d87-340f046aebc3" (UID: "6ba6d715-1eb9-4e19-8d87-340f046aebc3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.407608 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.407638 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba6d715-1eb9-4e19-8d87-340f046aebc3-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.407649 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9w5k\" (UniqueName: \"kubernetes.io/projected/6ba6d715-1eb9-4e19-8d87-340f046aebc3-kube-api-access-w9w5k\") on node \"crc\" DevicePath \"\"" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.457394 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-thgv5_e12dec2b-da40-4cad-92b5-91ab59c0e7c2/operator/0.log" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.490401 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fgjds_1a0e3e73-5ee6-4155-b3b2-0ada1f94100e/perses-operator/0.log" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.723362 4758 generic.go:334] "Generic (PLEG): container finished" podID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerID="54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2" exitCode=0 Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.723412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhkmp" event={"ID":"6ba6d715-1eb9-4e19-8d87-340f046aebc3","Type":"ContainerDied","Data":"54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2"} Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.723448 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhkmp" event={"ID":"6ba6d715-1eb9-4e19-8d87-340f046aebc3","Type":"ContainerDied","Data":"4cec0f7f69d1e3e28e06f7a8c59bef855fb0d5c6c401fa78bf1365c733c7850d"} Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.723459 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhkmp" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.723483 4758 scope.go:117] "RemoveContainer" containerID="54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.744146 4758 scope.go:117] "RemoveContainer" containerID="263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.793203 4758 scope.go:117] "RemoveContainer" containerID="05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.793365 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhkmp"] Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.830886 4758 scope.go:117] "RemoveContainer" containerID="54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2" Jan 22 18:35:26 crc kubenswrapper[4758]: E0122 18:35:26.831270 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2\": container with ID starting with 54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2 not found: ID does not exist" containerID="54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.831363 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2"} err="failed to get container status \"54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2\": rpc error: code = NotFound desc = could not find container \"54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2\": container with ID starting with 54d0cc9db022ab2491f51234a063ecc6b1b165d655e978406b0ed4b0f0e131e2 not found: ID does not exist" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.831399 4758 scope.go:117] "RemoveContainer" containerID="263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d" Jan 22 18:35:26 crc kubenswrapper[4758]: E0122 18:35:26.831777 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d\": container with ID starting with 263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d not found: ID does not exist" containerID="263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.831816 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d"} err="failed to get container status \"263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d\": rpc error: code = NotFound desc = could not find container \"263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d\": container with ID starting with 263e578dd5be4f3a9bd946e8ebfb0f8ff20c8e6ead785a5ace90768d1ba7466d not found: ID does not exist" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.831846 4758 scope.go:117] "RemoveContainer" containerID="05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc" Jan 22 18:35:26 crc kubenswrapper[4758]: E0122 18:35:26.832138 4758 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc\": container with ID starting with 05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc not found: ID does not exist" containerID="05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.832181 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc"} err="failed to get container status \"05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc\": rpc error: code = NotFound desc = could not find container \"05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc\": container with ID starting with 05c7eabdefc54991add4d8191883659f0705bee2a4ddd79a3f6765edd62f0efc not found: ID does not exist" Jan 22 18:35:26 crc kubenswrapper[4758]: I0122 18:35:26.860045 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhkmp"] Jan 22 18:35:28 crc kubenswrapper[4758]: I0122 18:35:28.820030 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" path="/var/lib/kubelet/pods/6ba6d715-1eb9-4e19-8d87-340f046aebc3/volumes" Jan 22 18:35:31 crc kubenswrapper[4758]: I0122 18:35:31.808056 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:35:31 crc kubenswrapper[4758]: E0122 18:35:31.808687 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:35:39 crc kubenswrapper[4758]: I0122 18:35:39.906329 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k8lvt_ba3d731b-c87e-4003-a063-9977ae4eb0a2/kube-rbac-proxy/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.101554 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-k8lvt_ba3d731b-c87e-4003-a063-9977ae4eb0a2/controller/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.111223 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-frr-files/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.338461 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-frr-files/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.361806 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-reloader/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.405689 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-reloader/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.411200 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-metrics/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.522146 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-frr-files/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.543164 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-reloader/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.571839 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-metrics/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.591479 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-metrics/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.757420 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-frr-files/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.773385 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-reloader/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.793892 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/cp-metrics/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.807976 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/controller/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.954939 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/frr-metrics/0.log" Jan 22 18:35:40 crc kubenswrapper[4758]: I0122 18:35:40.989836 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/kube-rbac-proxy/0.log" Jan 22 18:35:41 crc kubenswrapper[4758]: I0122 18:35:41.033547 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/kube-rbac-proxy-frr/0.log" Jan 22 18:35:41 crc kubenswrapper[4758]: I0122 18:35:41.195115 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/reloader/0.log" Jan 22 18:35:41 crc kubenswrapper[4758]: I0122 18:35:41.282161 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-np2j4_4612798c-6ae6-4a07-afe6-3f3574ee467b/frr-k8s-webhook-server/2.log" Jan 22 18:35:41 crc kubenswrapper[4758]: I0122 18:35:41.507467 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58fc8b87c6-qmw5r_8afd29cc-2dab-460e-ad9d-f17690c15f41/manager/2.log" Jan 22 18:35:41 crc kubenswrapper[4758]: I0122 18:35:41.510074 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-np2j4_4612798c-6ae6-4a07-afe6-3f3574ee467b/frr-k8s-webhook-server/1.log" Jan 22 18:35:41 crc kubenswrapper[4758]: I0122 18:35:41.639064 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58fc8b87c6-qmw5r_8afd29cc-2dab-460e-ad9d-f17690c15f41/manager/1.log" Jan 22 18:35:41 crc kubenswrapper[4758]: I0122 18:35:41.750180 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-755c77fb5-mjxnk_c95d135e-9d68-4e7f-843f-57f2419b596c/webhook-server/0.log" Jan 22 18:35:41 crc kubenswrapper[4758]: I0122 18:35:41.915431 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lpprz_cc433179-ae5b-4250-80c2-97af371fdfed/kube-rbac-proxy/0.log" Jan 22 18:35:42 crc kubenswrapper[4758]: I0122 18:35:42.133151 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lpprz_cc433179-ae5b-4250-80c2-97af371fdfed/speaker/1.log" Jan 22 18:35:42 crc kubenswrapper[4758]: I0122 18:35:42.184297 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lpprz_cc433179-ae5b-4250-80c2-97af371fdfed/speaker/2.log" Jan 22 18:35:42 crc kubenswrapper[4758]: I0122 18:35:42.824017 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qs76m_00ba6dcc-ddc4-44b1-be0b-599c3e0c3a10/frr/0.log" Jan 22 18:35:44 crc kubenswrapper[4758]: I0122 18:35:44.808845 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:35:44 crc kubenswrapper[4758]: E0122 18:35:44.809410 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:35:57 crc kubenswrapper[4758]: I0122 18:35:57.238082 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59_265db705-34c5-40d6-b7ef-c58046650cc9/util/0.log" Jan 22 18:35:57 crc kubenswrapper[4758]: I0122 18:35:57.413175 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59_265db705-34c5-40d6-b7ef-c58046650cc9/pull/0.log" Jan 22 18:35:57 crc kubenswrapper[4758]: I0122 18:35:57.446405 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59_265db705-34c5-40d6-b7ef-c58046650cc9/util/0.log" Jan 22 18:35:57 crc kubenswrapper[4758]: I0122 18:35:57.498580 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59_265db705-34c5-40d6-b7ef-c58046650cc9/pull/0.log" Jan 22 18:35:57 crc kubenswrapper[4758]: I0122 18:35:57.646236 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59_265db705-34c5-40d6-b7ef-c58046650cc9/pull/0.log" Jan 22 18:35:57 crc kubenswrapper[4758]: I0122 18:35:57.706218 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59_265db705-34c5-40d6-b7ef-c58046650cc9/util/0.log" Jan 22 18:35:57 crc kubenswrapper[4758]: I0122 18:35:57.753890 4758 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrdn59_265db705-34c5-40d6-b7ef-c58046650cc9/extract/0.log" Jan 22 18:35:57 crc kubenswrapper[4758]: I0122 18:35:57.870130 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj_89caa1d0-37ab-4cb9-b204-30a78b86fd9f/util/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.052582 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj_89caa1d0-37ab-4cb9-b204-30a78b86fd9f/util/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.060944 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj_89caa1d0-37ab-4cb9-b204-30a78b86fd9f/pull/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.078046 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj_89caa1d0-37ab-4cb9-b204-30a78b86fd9f/pull/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.228485 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj_89caa1d0-37ab-4cb9-b204-30a78b86fd9f/pull/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.258621 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj_89caa1d0-37ab-4cb9-b204-30a78b86fd9f/extract/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.292690 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zkspj_89caa1d0-37ab-4cb9-b204-30a78b86fd9f/util/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.434984 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz_8d48ec26-2fe3-4ade-82f3-db3d61bf969c/util/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.592106 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz_8d48ec26-2fe3-4ade-82f3-db3d61bf969c/util/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.659002 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz_8d48ec26-2fe3-4ade-82f3-db3d61bf969c/pull/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.671988 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz_8d48ec26-2fe3-4ade-82f3-db3d61bf969c/pull/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.815305 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:35:58 crc kubenswrapper[4758]: E0122 18:35:58.815541 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.844885 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz_8d48ec26-2fe3-4ade-82f3-db3d61bf969c/util/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.892878 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz_8d48ec26-2fe3-4ade-82f3-db3d61bf969c/pull/0.log" Jan 22 18:35:58 crc kubenswrapper[4758]: I0122 18:35:58.903011 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mhgrz_8d48ec26-2fe3-4ade-82f3-db3d61bf969c/extract/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.041843 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66r7j_5790457f-38e4-4d41-8ea3-f6d950f5d376/extract-utilities/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.192633 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66r7j_5790457f-38e4-4d41-8ea3-f6d950f5d376/extract-utilities/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.221650 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66r7j_5790457f-38e4-4d41-8ea3-f6d950f5d376/extract-content/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.244598 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66r7j_5790457f-38e4-4d41-8ea3-f6d950f5d376/extract-content/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.480598 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66r7j_5790457f-38e4-4d41-8ea3-f6d950f5d376/extract-content/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.540585 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66r7j_5790457f-38e4-4d41-8ea3-f6d950f5d376/extract-utilities/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.739315 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sfkpq_c9961771-fe17-45c0-ba58-04a487d45f06/extract-utilities/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.944440 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sfkpq_c9961771-fe17-45c0-ba58-04a487d45f06/extract-content/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.950985 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sfkpq_c9961771-fe17-45c0-ba58-04a487d45f06/extract-content/0.log" Jan 22 18:35:59 crc kubenswrapper[4758]: I0122 18:35:59.960383 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-66r7j_5790457f-38e4-4d41-8ea3-f6d950f5d376/registry-server/0.log" Jan 22 18:36:00 crc kubenswrapper[4758]: I0122 18:36:00.029155 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-sfkpq_c9961771-fe17-45c0-ba58-04a487d45f06/extract-utilities/0.log" Jan 22 18:36:00 crc kubenswrapper[4758]: I0122 18:36:00.127761 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sfkpq_c9961771-fe17-45c0-ba58-04a487d45f06/extract-utilities/0.log" Jan 22 18:36:00 crc kubenswrapper[4758]: I0122 18:36:00.202108 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sfkpq_c9961771-fe17-45c0-ba58-04a487d45f06/extract-content/0.log" Jan 22 18:36:00 crc kubenswrapper[4758]: I0122 18:36:00.389649 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f2gvw_6daa1231-490e-4ff7-9157-f49cdec96a5e/marketplace-operator/2.log" Jan 22 18:36:00 crc kubenswrapper[4758]: I0122 18:36:00.475129 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f2gvw_6daa1231-490e-4ff7-9157-f49cdec96a5e/marketplace-operator/1.log" Jan 22 18:36:00 crc kubenswrapper[4758]: I0122 18:36:00.744717 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m8fjx_08b59c09-1a10-4c8a-946b-0f760e9ba4a6/extract-utilities/0.log" Jan 22 18:36:00 crc kubenswrapper[4758]: I0122 18:36:00.882259 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m8fjx_08b59c09-1a10-4c8a-946b-0f760e9ba4a6/extract-utilities/0.log" Jan 22 18:36:00 crc kubenswrapper[4758]: I0122 18:36:00.937759 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m8fjx_08b59c09-1a10-4c8a-946b-0f760e9ba4a6/extract-content/0.log" Jan 22 18:36:00 crc kubenswrapper[4758]: I0122 18:36:00.940259 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sfkpq_c9961771-fe17-45c0-ba58-04a487d45f06/registry-server/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.010884 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m8fjx_08b59c09-1a10-4c8a-946b-0f760e9ba4a6/extract-content/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.246226 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m8fjx_08b59c09-1a10-4c8a-946b-0f760e9ba4a6/extract-content/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.310897 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m8fjx_08b59c09-1a10-4c8a-946b-0f760e9ba4a6/extract-utilities/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.498405 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-45rp2_2970941d-360b-4f65-befc-15b942098ef1/extract-utilities/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.661900 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m8fjx_08b59c09-1a10-4c8a-946b-0f760e9ba4a6/registry-server/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.689642 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-45rp2_2970941d-360b-4f65-befc-15b942098ef1/extract-utilities/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.752294 4758 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-operators-45rp2_2970941d-360b-4f65-befc-15b942098ef1/extract-content/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.786563 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-45rp2_2970941d-360b-4f65-befc-15b942098ef1/extract-content/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.970514 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-45rp2_2970941d-360b-4f65-befc-15b942098ef1/extract-content/0.log" Jan 22 18:36:01 crc kubenswrapper[4758]: I0122 18:36:01.994671 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-45rp2_2970941d-360b-4f65-befc-15b942098ef1/extract-utilities/0.log" Jan 22 18:36:02 crc kubenswrapper[4758]: I0122 18:36:02.994023 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-45rp2_2970941d-360b-4f65-befc-15b942098ef1/registry-server/0.log" Jan 22 18:36:13 crc kubenswrapper[4758]: I0122 18:36:13.808603 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:36:13 crc kubenswrapper[4758]: E0122 18:36:13.809610 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:36:15 crc kubenswrapper[4758]: I0122 18:36:15.161206 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-647895bbd9-wx9dr_ce26f110-8bb8-42b0-82cc-a001c2c1ebaf/prometheus-operator-admission-webhook/0.log" Jan 22 18:36:15 crc kubenswrapper[4758]: I0122 18:36:15.190877 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-647895bbd9-4wr75_e73c81be-8209-43d3-9756-49c2157dde87/prometheus-operator-admission-webhook/0.log" Jan 22 18:36:15 crc kubenswrapper[4758]: I0122 18:36:15.260084 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-54jp6_fdd4969c-d2b9-45fa-b5b2-da97462c0122/prometheus-operator/0.log" Jan 22 18:36:15 crc kubenswrapper[4758]: I0122 18:36:15.402232 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-thgv5_e12dec2b-da40-4cad-92b5-91ab59c0e7c2/operator/0.log" Jan 22 18:36:15 crc kubenswrapper[4758]: I0122 18:36:15.424242 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fgjds_1a0e3e73-5ee6-4155-b3b2-0ada1f94100e/perses-operator/0.log" Jan 22 18:36:25 crc kubenswrapper[4758]: I0122 18:36:25.808497 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:36:25 crc kubenswrapper[4758]: E0122 18:36:25.809489 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:36:27 crc kubenswrapper[4758]: E0122 18:36:27.190340 4758 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.223:35644->38.102.83.223:33887: write tcp 38.102.83.223:35644->38.102.83.223:33887: write: broken pipe Jan 22 18:36:30 crc kubenswrapper[4758]: E0122 18:36:30.758348 4758 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.223:38030->38.102.83.223:33887: read tcp 38.102.83.223:38030->38.102.83.223:33887: read: connection reset by peer Jan 22 18:36:38 crc kubenswrapper[4758]: I0122 18:36:38.816340 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:36:38 crc kubenswrapper[4758]: E0122 18:36:38.819395 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:36:49 crc kubenswrapper[4758]: I0122 18:36:49.809035 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:36:49 crc kubenswrapper[4758]: E0122 18:36:49.809620 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:37:03 crc kubenswrapper[4758]: I0122 18:37:03.807919 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:37:03 crc kubenswrapper[4758]: E0122 18:37:03.809624 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:37:18 crc kubenswrapper[4758]: I0122 18:37:18.819200 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:37:18 crc kubenswrapper[4758]: E0122 18:37:18.825627 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:37:29 crc kubenswrapper[4758]: I0122 18:37:29.808092 4758 scope.go:117] "RemoveContainer" 
containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:37:29 crc kubenswrapper[4758]: E0122 18:37:29.808928 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:37:41 crc kubenswrapper[4758]: I0122 18:37:41.808484 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:37:41 crc kubenswrapper[4758]: E0122 18:37:41.809391 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:37:54 crc kubenswrapper[4758]: I0122 18:37:54.809685 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:37:54 crc kubenswrapper[4758]: E0122 18:37:54.810431 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:38:05 crc kubenswrapper[4758]: I0122 18:38:05.180699 4758 scope.go:117] "RemoveContainer" containerID="16d98b054a985a06cbf1a01ab07bd1796951c1d372c7eeacebf49207c18da139" Jan 22 18:38:08 crc kubenswrapper[4758]: I0122 18:38:08.823800 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:38:08 crc kubenswrapper[4758]: E0122 18:38:08.827142 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:38:17 crc kubenswrapper[4758]: I0122 18:38:17.544188 4758 generic.go:334] "Generic (PLEG): container finished" podID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" containerID="c63c4d153cae82f00105da0713ffa87273a2c3f987de7c02d199ccb1988003be" exitCode=0 Jan 22 18:38:17 crc kubenswrapper[4758]: I0122 18:38:17.544321 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lbk6/must-gather-928fr" event={"ID":"a357e497-4622-4b8e-9ea7-9bfd5efa4716","Type":"ContainerDied","Data":"c63c4d153cae82f00105da0713ffa87273a2c3f987de7c02d199ccb1988003be"} Jan 22 18:38:17 crc kubenswrapper[4758]: I0122 18:38:17.545564 4758 scope.go:117] "RemoveContainer" containerID="c63c4d153cae82f00105da0713ffa87273a2c3f987de7c02d199ccb1988003be" Jan 22 18:38:17 crc 
kubenswrapper[4758]: I0122 18:38:17.815866 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lbk6_must-gather-928fr_a357e497-4622-4b8e-9ea7-9bfd5efa4716/gather/0.log" Jan 22 18:38:23 crc kubenswrapper[4758]: I0122 18:38:23.808978 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99" Jan 22 18:38:23 crc kubenswrapper[4758]: E0122 18:38:23.810090 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" Jan 22 18:38:27 crc kubenswrapper[4758]: I0122 18:38:27.432324 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6lbk6/must-gather-928fr"] Jan 22 18:38:27 crc kubenswrapper[4758]: I0122 18:38:27.432974 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6lbk6/must-gather-928fr" podUID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" containerName="copy" containerID="cri-o://81c86fb605c2717632248fca8f61655ac5c10b9b226626cbc38f55aeec103df9" gracePeriod=2 Jan 22 18:38:27 crc kubenswrapper[4758]: I0122 18:38:27.452886 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6lbk6/must-gather-928fr"] Jan 22 18:38:27 crc kubenswrapper[4758]: I0122 18:38:27.691765 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lbk6_must-gather-928fr_a357e497-4622-4b8e-9ea7-9bfd5efa4716/copy/0.log" Jan 22 18:38:27 crc kubenswrapper[4758]: I0122 18:38:27.692543 4758 generic.go:334] "Generic (PLEG): container finished" podID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" containerID="81c86fb605c2717632248fca8f61655ac5c10b9b226626cbc38f55aeec103df9" exitCode=143 Jan 22 18:38:27 crc kubenswrapper[4758]: I0122 18:38:27.951495 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lbk6_must-gather-928fr_a357e497-4622-4b8e-9ea7-9bfd5efa4716/copy/0.log" Jan 22 18:38:27 crc kubenswrapper[4758]: I0122 18:38:27.952112 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/must-gather-928fr" Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.097394 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zpzk\" (UniqueName: \"kubernetes.io/projected/a357e497-4622-4b8e-9ea7-9bfd5efa4716-kube-api-access-5zpzk\") pod \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\" (UID: \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\") " Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.097571 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a357e497-4622-4b8e-9ea7-9bfd5efa4716-must-gather-output\") pod \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\" (UID: \"a357e497-4622-4b8e-9ea7-9bfd5efa4716\") " Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.107937 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a357e497-4622-4b8e-9ea7-9bfd5efa4716-kube-api-access-5zpzk" (OuterVolumeSpecName: "kube-api-access-5zpzk") pod "a357e497-4622-4b8e-9ea7-9bfd5efa4716" (UID: "a357e497-4622-4b8e-9ea7-9bfd5efa4716"). InnerVolumeSpecName "kube-api-access-5zpzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.115413 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zpzk\" (UniqueName: \"kubernetes.io/projected/a357e497-4622-4b8e-9ea7-9bfd5efa4716-kube-api-access-5zpzk\") on node \"crc\" DevicePath \"\"" Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.299573 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a357e497-4622-4b8e-9ea7-9bfd5efa4716-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a357e497-4622-4b8e-9ea7-9bfd5efa4716" (UID: "a357e497-4622-4b8e-9ea7-9bfd5efa4716"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.320214 4758 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a357e497-4622-4b8e-9ea7-9bfd5efa4716-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.703006 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lbk6_must-gather-928fr_a357e497-4622-4b8e-9ea7-9bfd5efa4716/copy/0.log" Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.703528 4758 scope.go:117] "RemoveContainer" containerID="81c86fb605c2717632248fca8f61655ac5c10b9b226626cbc38f55aeec103df9" Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.703556 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lbk6/must-gather-928fr"
Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.728536 4758 scope.go:117] "RemoveContainer" containerID="c63c4d153cae82f00105da0713ffa87273a2c3f987de7c02d199ccb1988003be"
Jan 22 18:38:28 crc kubenswrapper[4758]: I0122 18:38:28.824888 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" path="/var/lib/kubelet/pods/a357e497-4622-4b8e-9ea7-9bfd5efa4716/volumes"
Jan 22 18:38:37 crc kubenswrapper[4758]: I0122 18:38:37.808189 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99"
Jan 22 18:38:37 crc kubenswrapper[4758]: E0122 18:38:37.831875 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zsbtx_openshift-machine-config-operator(a4b5f24a-19df-4969-b547-a5acc323c58a)\"" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a"
Jan 22 18:38:50 crc kubenswrapper[4758]: I0122 18:38:50.808640 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99"
Jan 22 18:38:51 crc kubenswrapper[4758]: I0122 18:38:51.046457 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"f0dcbfc0cf6009ec3ee4761ad3723a5dd815c7a59eaa43d0b2d683edfc73181d"}
Jan 22 18:39:05 crc kubenswrapper[4758]: I0122 18:39:05.276242 4758 scope.go:117] "RemoveContainer" containerID="b687aebc0399c4f878489e9d64bacccd096a7201dd9aa16bde15484cd7ea5f08"
Jan 22 18:41:13 crc kubenswrapper[4758]: I0122 18:41:13.837168 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 18:41:13 crc kubenswrapper[4758]: I0122 18:41:13.837921 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 18:41:43 crc kubenswrapper[4758]: I0122 18:41:43.837162 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 18:41:43 crc kubenswrapper[4758]: I0122 18:41:43.837933 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 18:42:13 crc kubenswrapper[4758]: I0122 18:42:13.837968 4758 patch_prober.go:28] interesting pod/machine-config-daemon-zsbtx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 18:42:13 crc kubenswrapper[4758]: I0122 18:42:13.838755 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 18:42:13 crc kubenswrapper[4758]: I0122 18:42:13.838818 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx"
Jan 22 18:42:13 crc kubenswrapper[4758]: I0122 18:42:13.839650 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f0dcbfc0cf6009ec3ee4761ad3723a5dd815c7a59eaa43d0b2d683edfc73181d"} pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 18:42:13 crc kubenswrapper[4758]: I0122 18:42:13.839755 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" podUID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerName="machine-config-daemon" containerID="cri-o://f0dcbfc0cf6009ec3ee4761ad3723a5dd815c7a59eaa43d0b2d683edfc73181d" gracePeriod=600
Jan 22 18:42:14 crc kubenswrapper[4758]: I0122 18:42:14.391159 4758 generic.go:334] "Generic (PLEG): container finished" podID="a4b5f24a-19df-4969-b547-a5acc323c58a" containerID="f0dcbfc0cf6009ec3ee4761ad3723a5dd815c7a59eaa43d0b2d683edfc73181d" exitCode=0
Jan 22 18:42:14 crc kubenswrapper[4758]: I0122 18:42:14.391215 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerDied","Data":"f0dcbfc0cf6009ec3ee4761ad3723a5dd815c7a59eaa43d0b2d683edfc73181d"}
Jan 22 18:42:14 crc kubenswrapper[4758]: I0122 18:42:14.391701 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zsbtx" event={"ID":"a4b5f24a-19df-4969-b547-a5acc323c58a","Type":"ContainerStarted","Data":"e9958c643c087ddf5ddf0aa7379d0bb68940212d844fac0852139e19e1e62f5c"}
Jan 22 18:42:14 crc kubenswrapper[4758]: I0122 18:42:14.391723 4758 scope.go:117] "RemoveContainer" containerID="c9e8da31eaeda42e5063e8764a836396b209f3fbacb8473b8179fc4d39590b99"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.438800 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t8dtf"]
Jan 22 18:42:40 crc kubenswrapper[4758]: E0122 18:42:40.439964 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerName="registry-server"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.439990 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerName="registry-server"
Jan 22 18:42:40 crc kubenswrapper[4758]: E0122 18:42:40.440015 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" containerName="gather"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.440022 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" containerName="gather"
Jan 22 18:42:40 crc kubenswrapper[4758]: E0122 18:42:40.440031 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" containerName="copy"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.440039 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" containerName="copy"
Jan 22 18:42:40 crc kubenswrapper[4758]: E0122 18:42:40.440053 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerName="extract-content"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.440061 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerName="extract-content"
Jan 22 18:42:40 crc kubenswrapper[4758]: E0122 18:42:40.440097 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerName="extract-utilities"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.440105 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerName="extract-utilities"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.440367 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" containerName="copy"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.440407 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a357e497-4622-4b8e-9ea7-9bfd5efa4716" containerName="gather"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.440418 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba6d715-1eb9-4e19-8d87-340f046aebc3" containerName="registry-server"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.442241 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.475217 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8dtf"]
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.543075 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3325e8a-0cd9-4333-ab9c-2211949bf197-utilities\") pod \"redhat-operators-t8dtf\" (UID: \"c3325e8a-0cd9-4333-ab9c-2211949bf197\") " pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.543207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kvdw\" (UniqueName: \"kubernetes.io/projected/c3325e8a-0cd9-4333-ab9c-2211949bf197-kube-api-access-5kvdw\") pod \"redhat-operators-t8dtf\" (UID: \"c3325e8a-0cd9-4333-ab9c-2211949bf197\") " pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.543365 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3325e8a-0cd9-4333-ab9c-2211949bf197-catalog-content\") pod \"redhat-operators-t8dtf\" (UID: \"c3325e8a-0cd9-4333-ab9c-2211949bf197\") " pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.645874 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3325e8a-0cd9-4333-ab9c-2211949bf197-utilities\") pod \"redhat-operators-t8dtf\" (UID: \"c3325e8a-0cd9-4333-ab9c-2211949bf197\") " pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.646013 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kvdw\" (UniqueName: \"kubernetes.io/projected/c3325e8a-0cd9-4333-ab9c-2211949bf197-kube-api-access-5kvdw\") pod \"redhat-operators-t8dtf\" (UID: \"c3325e8a-0cd9-4333-ab9c-2211949bf197\") " pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.646222 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3325e8a-0cd9-4333-ab9c-2211949bf197-catalog-content\") pod \"redhat-operators-t8dtf\" (UID: \"c3325e8a-0cd9-4333-ab9c-2211949bf197\") " pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.646696 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3325e8a-0cd9-4333-ab9c-2211949bf197-utilities\") pod \"redhat-operators-t8dtf\" (UID: \"c3325e8a-0cd9-4333-ab9c-2211949bf197\") " pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.646886 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3325e8a-0cd9-4333-ab9c-2211949bf197-catalog-content\") pod \"redhat-operators-t8dtf\" (UID: \"c3325e8a-0cd9-4333-ab9c-2211949bf197\") " pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.680878 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kvdw\" (UniqueName: \"kubernetes.io/projected/c3325e8a-0cd9-4333-ab9c-2211949bf197-kube-api-access-5kvdw\") pod \"redhat-operators-t8dtf\" (UID: \"c3325e8a-0cd9-4333-ab9c-2211949bf197\") " pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:40 crc kubenswrapper[4758]: I0122 18:42:40.766241 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8dtf"
Jan 22 18:42:41 crc kubenswrapper[4758]: I0122 18:42:41.259276 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8dtf"]
Jan 22 18:42:41 crc kubenswrapper[4758]: I0122 18:42:41.710215 4758 generic.go:334] "Generic (PLEG): container finished" podID="c3325e8a-0cd9-4333-ab9c-2211949bf197" containerID="42cc6ef8557cf942a5745df0e0fde002bae911d9f7b5d95af371a52dbfc79d42" exitCode=0
Jan 22 18:42:41 crc kubenswrapper[4758]: I0122 18:42:41.710270 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8dtf" event={"ID":"c3325e8a-0cd9-4333-ab9c-2211949bf197","Type":"ContainerDied","Data":"42cc6ef8557cf942a5745df0e0fde002bae911d9f7b5d95af371a52dbfc79d42"}
Jan 22 18:42:41 crc kubenswrapper[4758]: I0122 18:42:41.710663 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8dtf" event={"ID":"c3325e8a-0cd9-4333-ab9c-2211949bf197","Type":"ContainerStarted","Data":"e6dbd12dc2215ca38252c5e5d029177c8d45552f6456043ec4451641eccfe7ec"}
Jan 22 18:42:41 crc kubenswrapper[4758]: I0122 18:42:41.713442 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 18:42:49 crc kubenswrapper[4758]: I0122 18:42:49.809119 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8dtf" event={"ID":"c3325e8a-0cd9-4333-ab9c-2211949bf197","Type":"ContainerStarted","Data":"b02efc3134d8a46758e1e70cacca47325b675d6bfe1610d1196f27779c5c7ce7"}
Jan 22 18:42:52 crc kubenswrapper[4758]: I0122 18:42:51.832971 4758 generic.go:334] "Generic (PLEG): container finished" podID="c3325e8a-0cd9-4333-ab9c-2211949bf197" containerID="b02efc3134d8a46758e1e70cacca47325b675d6bfe1610d1196f27779c5c7ce7" exitCode=0
Jan 22 18:42:52 crc kubenswrapper[4758]: I0122 18:42:51.833060 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8dtf" event={"ID":"c3325e8a-0cd9-4333-ab9c-2211949bf197","Type":"ContainerDied","Data":"b02efc3134d8a46758e1e70cacca47325b675d6bfe1610d1196f27779c5c7ce7"}